Evaluating AI Models and Predictors for COVID-19 Infection Dependent on Data from Patients with Cancer or Not: A Systematic Review
Korean J Clin Pharm 2024;34(3):141-154
Published online September 30, 2024
© 2024 Korean College of Clinical Pharmacy.

Takdon Kim1 and Heeyoung Lee2,3*

1Clinical Trials Center, Chungnam National University Hospital, Daejeon 35015, Republic of Korea
2College of Pharmacy, Inje University, Gimhae 50834, Republic of Korea
3Inje Institute of Pharmaceutical Sciences and Research, Inje University, Gimhae 50834, Republic of Korea
Correspondence to: Heeyoung Lee, College of Pharmacy, Inje University, Gimhae 50834, Republic of Korea
Tel: +82-55-320-3328, Fax: +82-55-320-3328, E-mail: phylee1@inje.ac.kr
Received May 3, 2024; Revised June 13, 2024; Accepted June 14, 2024.
This is an Open Access journal distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Background: As preexisting comorbidities are risk factors for Coronavirus Disease 2019 (COVID-19), improved tools are needed for screening or diagnosing COVID-19 in clinical practice. Difficulties in including data from vulnerable patients may create data imbalance and hinder the development of well-performing prediction tools, such as artificial intelligence (AI) models. Thus, we systematically reviewed studies on AI prognosis prediction in patients infected with COVID-19 who had existing comorbidities, including cancer, to investigate model performance and predictors dependent on patient data. Methods: The PubMed and Cochrane Library databases were searched. This review included studies that used AI to predict outcomes in COVID-19 patients, whether or not they had cancer. Preprints, abstracts, reviews, and animal studies were excluded from the analysis. Results: The majority of non-cancer studies (54.55 percent) showed an area under the curve (AUC) of >0.90 for AI models, whereas 30.77 percent of cancer studies showed the same result. For predicting mortality (3.85 percent), severity (8.33 percent), and hospitalization (14.29 percent), only cancer studies showed AUC values between 0.50 and 0.69. The distribution of comorbidity data varied more in non-cancer studies than in cancer studies, but age was indicated as the primary predictor in all studies. Non-cancer studies with more balanced comorbidity datasets showed higher AUC values than cancer studies. Conclusion: Based on the current findings, dataset balancing is essential for improving AI performance in predicting COVID-19 in patients with comorbidities, especially considering age.
Keywords : Artificial intelligence models, cancer, comorbidity, coronavirus disease-19, non-cancer
Introduction

Coronavirus Disease 2019 (COVID-19) was first detected in December 2019 and spread rapidly to most cities and countries worldwide.1) Despite the expiration of the public health emergency declaration,2) the number of patients hospitalized for COVID-19 continues to increase, and the disease has caused over six million deaths globally.3) During this global health emergency, patients with various comorbidities, including cancer, have shown life-threatening outcomes after COVID-19 infection.4) In some patients with comorbidities, the treatments they receive may further suppress the immune system.5) Patients with cancer have a 2.25-fold higher risk of mortality and increased rates of hospital admission compared with patients without cancer.6) Considering the immunosuppressed status of such vulnerable populations against viral infections, previous studies have aimed to provide precise tools for screening or predicting the prognosis of individuals with underlying health conditions such as cancer. However, limitations in recruiting vulnerable patients resulted in uneven datasets, which hindered consistent results for screening risk factors or predicting prognostic conditions in previous studies.7) Despite these limitations, there is a consistent need in clinical practice for improved diagnostic or predictive methods for COVID-19 in patients with deteriorating health conditions. In particular, pre-existing comorbidities are well-known risk factors closely associated with increased mortality among patients infected with COVID-19.8) Nevertheless, the heterogeneous granularity of data on vulnerable health conditions, such as cancer, within specific subpopulations of the collected samples has restricted the generation of accurate estimates, because current statistical methods provide insufficient prognostic or diagnostic information.9) Still, as outcome severity is greater in patients with cancer than in patients without cancer infected with COVID-19,5) there is an unmet need for screening or predicting outcomes in patients with cancer, especially compared with other vulnerable patients with COVID-19 infection and various other health conditions, despite existing imbalanced dataset issues.

Artificial intelligence (AI) has contributed to clinical decision-making and disease diagnosis,10) supporting real-time inference for health-risk alerts and prediction of health outcomes.10) Given these discriminatory capabilities of AI in healthcare, various patient data were used in AI studies during the pandemic era to provide efficient AI models and precise predictors of COVID-19 infection.11,12) However, as shown in previous studies, the performance of AI models is significantly influenced by imbalanced data, including unbalanced comorbidity datasets.7,13) Therefore, it is necessary to evaluate the impact of imbalanced datasets, affected by comorbidity diversity, on the performance of AI models and predictors among patients infected with COVID-19. In particular, considering the difficulties of enrolling vulnerable patients in trials, such as those with cancer,14) it is necessary to assess how including patients with cancer among those with COVID-19 affects the performance of AI models, through comparison with datasets without cancer.

Therefore, the current systematic review was conducted to evaluate the performance of AI models and predictors of COVID-19 in patients with pre-existing comorbidities, comparing studies with and without data from patients with cancer.

Materials and Methods

This systematic review was conducted based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.15)

Data sources and search strategy

The PubMed and Cochrane Library databases were searched for eligible articles published up to May 2023. A manual search was also conducted to identify studies evaluating AI models and providing important predictors or values for clinical outcomes, such as mortality, severity, hospitalization, and mechanical ventilation, for datasets including patients with or without cancer. Titles and abstracts were screened using the following terms to categorize the associated text: “Cancer”, “COVID-19”, “AI”, “Cardiovascular disease (CVD)”, “diabetes”, “mortality”, “severity”, “hospitalization”, or “mechanical ventilation.”
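As an illustration of how such screening terms might be combined, the short sketch below builds a Boolean query string in the style of a PubMed search. The grouping of terms is an assumption for illustration only and does not reproduce the authors' actual search strategy.

```python
# Illustrative only: one possible way to combine the screening terms listed
# above into a Boolean query string for a PubMed-style search. The grouping is
# an assumption, not the authors' actual search string.
ai_terms = ["artificial intelligence", "AI", "machine learning", "deep learning"]
condition_terms = ["cancer", "cardiovascular disease", "CVD", "diabetes"]
outcome_terms = ["mortality", "severity", "hospitalization", "mechanical ventilation"]

def or_block(terms):
    """Quote multi-word phrases and join all terms with OR inside parentheses."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(['"COVID-19"', or_block(ai_terms),
                      or_block(condition_terms), or_block(outcome_terms)])
print(query)
```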

Study selection

The investigators initially evaluated titles and abstracts to identify potentially relevant studies. To qualify for inclusion, studies were required to meet the following criteria: 1) studies using data from patients with or without cancer diagnosed with COVID-19; 2) studies using AI techniques, such as machine learning and deep learning; and 3) studies using AI models to predict mortality, severity, hospitalization, or mechanical ventilation. Preprints, abstracts, reviews, systematic reviews, meta-analyses, books, and animal studies were excluded. Our study incorporated various studies aimed at predicting the severity of COVID-19; the definition of severity varied across the included studies. Disagreements between the two investigators were resolved through discussion.

Data extraction

Two investigators extracted data from the selected literature. Initially, information was collected regarding the first author, source of data, number of patients included in the data, inclusion or exclusion criteria, endpoints, types of AI models, performance metrics of the AI models, and important predictors or values. If AI models were developed during a study, we classified these new AI models as “self-developed model.” In addition, based on the data source, datasets used in individual studies were classified as “hospital data” or “public data” if the source was hospital-based or publicly available, respectively. These measures were compared between studies that included datasets containing information about patients with cancer (referred to as a “cancer study”) and studies that excluded patients with cancer from their datasets (referred to as a “non-cancer study”). The comparison was based on the predicted values, including mortality, severity, hospitalization, and mechanical ventilation. These “cancer study” and “non-cancer study” groups comprised patients diagnosed with COVID-19 who had one or more comorbidities. The current study assessed the performance metrics of the AI models, evaluating models that were used more than twice. Additionally, the five most important variables, including underlying diseases that served as predictors of mortality and severity, were evaluated using the included data. Important predictors are variables ranked highly for prediction among the variables in the prognostic prediction model. We collected the top five variables based on age, sex, or underlying diseases that appeared in the included articles. Underlying diseases were classified as “cardiovascular”, “endocrine”, “respiratory”, “gastrointestinal”, “psychological”, “neurological”, “cancer”, or “others.” The variables of the underlying diseases were categorized according to the relevant diagnosis or treatment; if a diagnosis or treatment did not fit any of these categories, the variable was included in the “others” category.
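The categorization step described above can be pictured as a simple lookup from an extracted variable to its underlying-disease category. The sketch below is a minimal illustration; the variable names and their assignments are hypothetical examples, not the authors' actual extraction sheet.

```python
# Minimal sketch of the comorbidity categorization described above.
# Variable names and category assignments are hypothetical examples only.
CATEGORY_MAP = {
    "hypertension": "cardiovascular",
    "heart failure": "cardiovascular",
    "diabetes mellitus": "endocrine",
    "asthma": "respiratory",
    "inflammatory bowel disease": "gastrointestinal",
    "depression": "psychological",
    "stroke": "neurological",
    "lung cancer": "cancer",
}

def categorize(variable: str) -> str:
    """Map an extracted predictor to a category; unmatched variables fall into 'others'."""
    return CATEGORY_MAP.get(variable.strip().lower(), "others")

if __name__ == "__main__":
    for v in ["Hypertension", "Lung cancer", "Chronic kidney disease"]:
        print(f"{v} -> {categorize(v)}")
```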

Data synthesis

To compare AI model performance between cancer and non-cancer studies based on AUC range, the proportion of included studies in each range was described as a percentage. The effect size for the AUC of the machine learning models, expressed as the mean difference and standard deviation, was calculated. A heat map was created to illustrate the important variables and the frequency with which studies identified underlying diseases, together with age and sex, among the top five important variables. In addition, percentages were described based on comorbidities in both cancer and non-cancer studies. Analyses were conducted using Microsoft Excel and R software (version 4.3.1).
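A minimal sketch of this synthesis step is shown below: it computes the percentage of studies falling into each AUC range and a simple mean difference with group standard deviations. The analyses were performed in Excel and R; this Python sketch, with invented placeholder AUC values, is only meant to make the calculation explicit.

```python
# Minimal sketch of the data-synthesis step: proportions of studies per AUC
# range and a mean-difference summary with group standard deviations.
# The AUC lists below are invented placeholders, not the reviewed results.
from statistics import mean, stdev

cancer_auc = [0.83, 0.99, 0.71, 0.90, 0.63]        # hypothetical values
non_cancer_auc = [0.81, 0.98, 0.92, 0.87, 0.90]    # hypothetical values
bins = [(0.50, 0.69), (0.70, 0.79), (0.80, 0.89), (0.90, 1.00)]

def proportions(aucs):
    """Percentage of studies whose AUC falls in each predefined range."""
    result = {}
    for lo, hi in bins:
        share = 100 * sum(lo <= a <= hi for a in aucs) / len(aucs)
        result[f"{lo:.2f}-{hi:.2f}"] = round(share, 2)
    return result

print("cancer studies:", proportions(cancer_auc))
print("non-cancer studies:", proportions(non_cancer_auc))

# Effect size expressed as the mean difference, with the spread of each group.
diff = mean(non_cancer_auc) - mean(cancer_auc)
print(f"mean difference = {diff:.3f} "
      f"(SD cancer = {stdev(cancer_auc):.3f}, SD non-cancer = {stdev(non_cancer_auc):.3f})")
```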

Results

Study Selection

From the search results, we obtained 1,629 articles from PubMed and 213 studies from the Cochrane Library. After eliminating forty duplicate studies, an additional 1,334 research articles were excluded following the screening of titles and abstracts in accordance with the inclusion and exclusion criteria of the present study. Finally, sixty-three suitable articles16-78) were included in the analysis, which were divided into forty-five articles16-60) of cancer studies and eighteen articles61-78) of non-cancer studies (Fig. 1).

Fig. 1. Flowchart of the study selection process.

Study Description

The basic characteristics of the sixty-three included studies are shown in Tables 1 and 2. A total of 9,284,777 patients from the cancer studies and 1,095,679 patients from the non-cancer studies were included in the analysis. Among the cancer studies, twenty-eight16,19,20,22,24,27,28,31,33,35,36,39-50,54-58) used only hospital data to evaluate AI models or important predictors, whereas one study59) included both hospital and public data. Sixteen studies17,18,21,23,25,26,29,30,32,34,37,38,51-53,60) among the cancer studies used public data. As predictive values, thirty-one studies16-46) predicted mortality using AI models, eight studies17,20,22,41,47-55) evaluated severity as the final outcome among the cancer studies, and four studies predicted hospitalization.

Table 1. Characteristics of cancer studies

Study name Source of data Number of patients Prediction values AI models Performance metrics
Aghakhani et al. (2023) Hospital data 44,112 Mortality DT, RF, GBM, XGBoost AUC, accuracy, sensitivity, specificity, F1 score, recall, precision
Ahamad et al. (2022) Public data 72,147 Severity, mortality, hospitalization RF, DT, XGBoost, GBM, SVM, GBM AUC, accuracy, F1 score, precision, recall
Upadhyay et al. (2021) Public data N/A Mortality NN N/A
Banoei et al. (2023) Hospital data 1,743 Mortality Bootstrap forest, Boosted tree, Neural boosted, Nominal logistic, Lasso, SVM, DT, KNN AUC, sensitivity, specificity
Carbonell et al. (2022) Hospital data 152 Mortality, severity Lasso AUC
An et al. (2020) Public data 10,237 Mortality LASSO, Linear SVM, RBF-SVM, RF, KNN AUC, accuracy, sensitivity, specificity
Gao et al. (2021) Hospital data 23,749 Mortality, severity LR, RF, NN, KNN, GBM, ensemble model (SVM, GBM, NN) AUC, accuracy, sensitivity, specificity, F1 score, PPV, NPV
Experton et al. (2021) Public data 1,030,893 Mortality, hospitalization RF AUC, accuracy
Heydar et al. (2022) Hospital data 505 Mortality RF AUC, accuracy, sensitivity, specificity
Heyl et al. (2022) Public data 215,831 Mortality RF, XGBoost, LR AUC, accuracy
Hilal et al. (2022) Public data 608,140 Mortality, hospitalization XGBoost AUC, accuracy, F1 score, recall, precision
Ikemura et al. (2021) Hospital data 4,313 Mortality GBM, XGBoost, GLM, RF, DL AUC, sensitivity, specificity
Jamshidi et al. (2021) Hospital data 797 Mortality RF, LR, GBM, SVM, NN AUC, sensitivity, specificity
Razjouyan et al. (2022) Public data 9,541 Mortality Lasso N/A
Edqvist et al. (2023) Public data 8,328,518 Mortality, hospitalization GBM, RF Accuracy
Karasneh et al. (2022) Hospital data 1,613 Mortality LR, RF, MARS, KNN, XGBoost, CART AUC
Lee et al. (2022) Public data 7,943 Mortality, hospitalization LR, RF AUC, precision
Modelli de Andrade et al. (2022) Hospital data 1,379 Mortality Lasso, XGBoost, Elastic Net AUC
Kivrak et al. (2021) Public data 1,603 Mortality XGBoost, RF, KNN, DL accuracy, sensitivity, specificity, precision
Rahman et al. (2021) Hospital data 250 Mortality self-developed model AUC, accuracy, sensitivity, specificity
Lorè et al. (2021) Hospital data 111 Mortality DT AUC
Rasmy et al. (2022) Public data CRWD: 247,960 OPTUM: 36,140 Mortality, mechanical ventilation, hospitalization LR, GBM, self-developed model AUC
Wollenstein-Betech et al. (2020) Public data 91,179 Mortality, hospitalization SVM, RF, XGBoost, LR AUC, accuracy, F1 score, precision, recall
Schmidt et al. (2021) Hospital data 4,643 Mortality XGBoost AUC
Alle et al. (2022) Hospital data 544 Mortality SVM, RF, XGBoost, LR AUC, F1 score, precision, recall
Nojiri et al. (2023) Hospital data 11,440 Mortality, severity XGBoost, Lasso AUC
Snider et al. (2021) Hospital data 127 Mortality, severity DT, RF, Lasso AUC, recall, precision
Subudhi et al. (2021) Hospital data 3,597 Mortality Boosting models, self-developed model N/A
Kar et al. (2021) Hospital data 2,370 Mortality XGBoost AUC, accuracy, sensitivity, specificity, F1 score, precision
Wu et al. (2021) Hospital data 2,144 Mortality DenseNet AUC, accuracy, sensitivity, specificity, F1 score, precision, recall
Guan et al. (2021) Hospital data 1,270 Mortality XGBoost, Lasso AUC, F1 score, precision, recall
Jung et al. (2022) Hospital data 1,076 Severity LR, XGBoost AUC, accuracy
Zhao et al. (2021) Hospital data 172 Severity LR, SVM AUC, accuracy, sensitivity, specificity
Jiao et al. (2021) Hospital data 2,309 Severity DL, self-developed model AUC, sensitivity, specificity, F1 score
Kang et al. (2021) Hospital data 151 Severity NN AUC, sensitivity, specificity, F1 score
Wong et al. (2021) Public data 502,524 Severity XGBoost AUC
Rojas-García et al. (2023) Public data 11,564 Severity SVM, RF, XGBoost, LR AUC, accuracy, sensitivity, specificity, F1 score, PPV, NPV
Burns et al. (2022) Public data 4,295 Severity LR, RF, SVM, XGBoost AUC, accuracy, specificity, F1 score, precision, recall, NPV
Wang et al. (2022) Hospital data 1,051 Severity self-developed model AUC, accuracy, sensitivity, specificity
Chen et al. (2021) Hospital data 362 Severity RF AUC, accuracy, sensitivity, specificity, F1 score
De Freitas et al. (2022) Hospital data 7,336 Hospitalization RF, XGBoost, GBM, Lasso AUC, accuracy, F1 score, precision
Jehi et al. (2020) Hospital data 4,536 Hospitalization Lasso AUC
Hao et al. (2020) Hospital data 2,566 Hospitalization, Mechanical ventilation SVM, RF, XGBoost, LR AUC, accuracy, F1 score, precision, recall
Aminu et al. (2022) Public data, hospital data 502 Mechanical ventilation LR, RF, SVM, GAM AUC, accuracy, sensitivity, specificity
Chen et al. (2021) Public data 6,485 Hospitalization Lasso, LR AUC, sensitivity, specificity

DenseNet: densely connected convolutional network; Lasso: least absolute shrinkage and selection operator; XGBoost: extreme gradient boosting; RF: random forest; LR: logistic regression; DT: decision tree; KNN: k-nearest neighbors; DL: deep learning model; SVM: support vector machine; NN: neural network; LGBM: light gradient boosting machine; GBM: gradient boosting model; GAM: generalized additive model; GLM: generalized linear model; MARS: multivariate adaptive regression splines; CART: classification and regression tree; AUC: area under the curve; AI: artificial intelligence; PPV: positive predictive value; NPV: negative predictive value.



Table 2. Characteristics of non-cancer studies

Study name Source of data Number of patients Prediction values AI models Performance metrics
Churpek et al. (2021) Hospital data 5,075 Mortality XGBoost, RF, SVM, LR, neural net, self-developed model AUC, sensitivity, specificity, PPV, NPV
Elghamrawy et al. (2022) Public data 10,248 Mortality self-developed model AUC, accuracy, sensitivity, specificity, F1 score, FPR
Khadem et al. (2022) Hospital data 156 Mortality RF AUC, accuracy, sensitivity, specificity
Kablan et al. (2023) Hospital data 247 Mortality Ensemble model (GLM, NB, SDA, RF, PLS, KNN, SVM, MLP) AUC, accuracy, sensitivity, specificity, F1
Ovcharenko et al. (2023) Hospital data 350 Mortality CatBoost, RF, MLP, LGBM, ET, XGBoost, LR, DT, KNN AUC, sensitivity, specificity
Passarelli-Araujo et al. (2022) Public data 8,358 Mortality LR, SVM, RF, XGBoost AUC, accuracy, precision, recall
Pournazari et al. (2021) Hospital data 724 Mortality LR AUC
Pyrros et al. (2022) Public data 900 Mortality CNN, LR AUC
Yazdani et al. (2023) Hospital data 1,572 Mortality MLP, NB, KNN, DT, RF AUC, accuracy, precision, recall, F1 score
Wang et al. (2021) Hospital data 3,740 Mortality, mechanical ventilation XGBoost, LR, lasso, MLP, RNN, GRU, LSTM AUC, sensitivity, specificity
Woo et al. (2021) Hospital data 415 Mortality, severity LR, self-developed model AUC, sensitivity, specificity
Ageno et al. (2021) Hospital data 610 Severity Lasso, RF AUC, sensitivity, specificity, PPV, NPV
Carr et al. (2021) Hospital data 7,513 Severity Lasso, KNN AUC, sensitivity, specificity
Min et al. (2023) Hospital data 3,145 Severity CatBoost, CART AUC, accuracy, precision, recall, F1 score
Sun et al. (2020) Hospital data 336 Severity SVM AUC
Lipták et al. (2022) Hospital data 680 Hospitalization RF AUC
Nakamichi et al. (2021) Hospital data 190 Hospitalization AdaBoost, Extra Trees, Gradient boosting, RF AUC
Tariq et al. (2021) Hospital data 2,844 Hospitalization Fusion model (LR, RF, neural network, XGBoost) AUC, precision, recall, F1 score

RF: random forest; RNN: recurrent neural network; XGBoost: extreme gradient boosting; LR: logistic regression; Lasso: least absolute shrinkage and selection operator; SDA: shrinkage discriminant analysis; SVM: support vector machine; GLM: generalized linear model; GRU: gated recurrent unit; NB: naive bayes; KNN: k-nearest neighbors; MLP: multi-layer perceptron; PLS: partial least squares; CART: classification and regression trees; CNN: convolutional neural network; ET: extra trees; LGBM: light gradient boosting machine; LSTM: long short-term memory; DT: decision tree; AUC: area under the curve; AI: artificial intelligence; PPV: positive predictive value; NPV: negative predictive value.



Among the eighteen studies61-78) that evaluated AI models in patients without cancer infected with COVID-19 (Table 2), fifteen studies61,63-65,67,69-78) used hospital data and three studies62,66,68) used public data. As predictive values, eleven studies61-71) predicted mortality, whereas five studies71-75) predicted the severity of COVID-19. Regarding the diversity of patient data among cancer studies, 58.9 percent of patients had urinary diseases, such as urinary tract infections, kidney stones, interstitial cystitis, kidney failure, and urethritis, whereas only 0.03 percent of patients had gastrointestinal diseases as comorbidities. Among non-cancer studies, the highest proportion of patients had cardiovascular disease (37.02 percent) as a comorbidity, whereas psychological diseases were not identified (Fig. 2).

Fig. 2. Percentages of included patients based on types of comorbidities. CVD: cardiovascular disease; EDO: endocrine disease; RES: respiratory disease; GI: gastrointestinal disease; UI: urinary disease; PSY: psychological disease; CA: cancer; NEU: neurological disease; OT: others

Performance metrics of AI models in cancer and non-cancer studies

For cancer and non-cancer studies, the performance metrics of the AI models were reported using AUC, accuracy, sensitivity, specificity, and F1 score (Tables 3 and 4). Among the forty-two studies16,17,19-28,30-42,44-60) providing performance metrics in cancer studies, forty studies16,17,20-28,31-33,35-60) provided AUC values for the AI models (Table 3). Eighteen non-cancer studies61-78) provided performance metrics of the AI models, including the AUC value (Table 4).

Table 3. Summary of AI model performance metrics in cancer studies*

Study name AI models Performance metrics

AUC Accuracy Sensitivity Specificity F1 score
Aghakhani et al. (2023) XGBoost 0.83 0.77 0.74 0.77 0.8
Ahamad et al. (2022) RF AUC: Medical data 1.00, 0.98a / AE data 1.00+; Accuracy: Medical data 1.00, 0.98a / AE data 1.00+; Sensitivity N/A; Specificity N/A; F1 score: Medical data 1.00, 0.98a / AE data 1.00+
Banoei et al. (2023) BNN 0.85 N/A 0.57 0.94 N/A
Carbonell et al. (2022) Elastic Net 0.78, 0.82a N/A N/A N/A N/A
An et al. (2020) LASSO 0.83 0.86 0.94 0.90 N/A
Gao et al. (2021) Ensemble model 0.99 0.96 0.87 0.97 0.87
Experton et al. (2021) RF 0.71, 0.66b 0.65, 0.61b N/A N/A N/A
Heydar et al. (2022) RF AUC: DM 0.80 / non-DM 0.84; Accuracy: DM 0.82 / non-DM 0.80; Sensitivity: DM 0.80 / non-DM 0.91; Specificity: DM 0.55 / non-DM 0.56; F1 score N/A
Heyl et al. (2022) RF 0.90 0.83 N/A N/A N/A
Hilal et al. (2022) XGBoost AUC: Delta 0.78, 0.81b / Omicron 0.70, 0.78b; Accuracy: Delta 0.96, 0.85b / Omicron 0.98, 0.94b; Sensitivity N/A; Specificity N/A; F1 score: Delta 0.27, 0.35b / Omicron 0.27, 0.34b
Ikemura et al. (2021) GBM 0.80 N/A 0.919 0.735 N/A
Jamshidi et al. (2021) RF 0.79 N/A 0.70 0.75 N/A
Edqvist et al. (2023) GBM, RF AUC N/A; Accuracy: T1DM RF 0.88 / T2DM GBM 0.74; Sensitivity N/A; Specificity N/A; F1 score N/A
Karasneh et al. (2022) LR 0.77 N/A N/A N/A N/A
Lee et al. (2022) LR 0.88 N/A N/A N/A N/A
Modelli de Andrade et al. (2022) Elastic Net 0.78 N/A N/A N/A N/A
Kivrak et al. (2021) XGBoost N/A 0.99 0.99 1.00 N/A
Rahman et al. (2021) self-developed model 0.95 0.90 0.80 0.92 N/A
Lorè et al. (2021) DT 0.73 N/A N/A N/A N/A
Rasmy et al. (2022) self-developed model 0.93, 0.92c N/A N/A N/A N/A
Wollenstein-Betech et al. (2020) LR 0.63, 0.74b 0.79, 0.71b N/A N/A 0.71, 0.70b
Schmidt et al. (2021) XGBoost 0.79 N/A N/A N/A N/A
Alle et al. (2022) LR 0.92 N/A N/A N/A 0.71
Nojiri et al. (2023) Lasso 0.80, 0.78a N/A N/A N/A N/A
Snider et al. (2021) DT 0.93, 0.96a N/A N/A N/A N/A
Kar et al. (2021) XGBoost 0.88 0.97 0.78 0.98 0.81
Wu et al. (2021) self-developed model 0.85 0.75 0.79 0.74 0.40
Guan et al. (2021) XGBoost 1.00 N/A N/A N/A 0.94
Jung et al. (2022) XGBoost 0.65 0.70 N/A N/A N/A
Zhao et al. (2021) SVM 0.94 0.91 0.90 0.94 N/A
Jiao et al. (2021) self-developed model 0.84 N/A 0.73 0.85 0.83
Kang et al. (2021) NN 0.95 N/A 1.00 0.85 0.96
Wong et al. (2021) XGBoost 0.81, 0.72a N/A N/A N/A N/A
Rojas-García et al. (2023) XGBoost 0.79 0.75 0.83 0.74 0.48
Burns et al. (2022) XGBoost 0.75 0.67 N/A 0.66 0.49
Wang et al. (2022) self-developed model 0.85 0.83 0.62 0.89 N/A
Chen et al. (2021) RF 0.90 0.94 0.99 0.93 0.97
De Freitas et al. (2022) RF 0.93 0.90 N/A N/A 0.94
Jehi et al. (2020) self-developed model 0.90 N/A N/A N/A N/A
Hao et al. (2020) RF 0.88b, 0.85c 0.88b, 0.86c N/A N/A 0.91b, 0.91c
Aminu et al. (2022) SVM, LR 1.00 0.99 1.00 0.98 N/A
Chen et al. (2021) LR 0.81 N/A 0.80 0.71 N/A

*all values of predicting mortality except for a: prediction value of severity, b: prediction value of hospitalization, and c: prediction value of mechanical ventilation; +: values including mortality and severity; Lasso: Least Absolute Shrinkage and Selection Operator; XGBoost: Extreme Gradient Boosting; RF: Random Forest; LR: Logistic Regression; DT: Decision Tree; SVM: Support Vector Machine; NN: Neural Network; GBM: Gradient Boosting Machine; AUC: area under the curve; AI: artificial intelligence



Table 4. Summary of AI model performance metrics in non-cancer studies*

Study name AI models Performance metrics

AUC Accuracy Sensitivity Specificity F1 score
Churpek et al. (2021) XGBoost 0.81 N/A N/A N/A N/A
Elghamrawy et al. (2022) SVM 0.98 0.93 0.96 0.91 0.93
Khadem et al. (2022) RF 0.92 0.87 0.72 0.74 N/A
Kablan et al. (2023) GLM 0.87 0.74 1.00 0.43 0.65
Ovcharenko et al. (2023) CatBoost 0.87 N/A 0.76 0.75 N/A
Passarelli-Araujo et al. (2022) XGBoost 0.90 0.81 N/A N/A N/A
Pournazari et al. (2021) LR 0.91 N/A N/A N/A N/A
Pyrros et al. (2022) CNN 0.84 N/A N/A N/A N/A
Yazdani et al. (2023) RF 0.98 0.93 N/A N/A 0.93
Wang et al. (2021) XGBoost, LR AUC: XGBoost 0.92 / LRb 0.81; Accuracy N/A; Sensitivity: XGBoost 0.85 / LRb 0.83; Specificity: XGBoost 0.86 / LRb 0.70; F1 score N/A
Woo et al. (2021) self-developed model 0.81, 0.82a N/A N/A N/A N/A
Ageno et al. (2021) Lasso 0.76 N/A 0.93 0.34 N/A
Carr et al. (2021) LR 0.73 N/A 0.73 0.59 N/A
Min et al. (2023) CatBoost 0.82 0.73 N/A N/A N/A
Sun et al. (2020) SVM 0.97 N/A N/A N/A N/A
Lipták et al. (2022) RF 0.76 N/A N/A N/A N/A
Nakamichi et al. (2021) RF 0.93 N/A N/A N/A N/A
Tariq et al. (2021) Fusion model 0.91 N/A N/A N/A N/A

*all values of predicting mortality except for a and b; a: prediction value of severity, b: prediction value of mechanical ventilation; RF: random forest; XGBoost: extreme gradient boosting; LR: logistic regression; Lasso: least absolute shrinkage and selection operator; SVM: support vector machine; GLM: generalized linear model; CNN: convolutional neural network; AUC: area under the curve; AI: artificial intelligence.



To predict mortality, the AUC values of AI models in cancer studies varied more widely than those in non-cancer studies (Fig. 3a). The majority of non-cancer studies (54.55 percent) showed AUC values of AI models over 0.90, whereas 30.77 percent of cancer studies showed AUC values in the same range for predicting mortality. For predicting severity, a larger proportion of cancer studies than non-cancer studies (33.33 percent vs 20 percent) provided AI models with AUC values between 0.90 and 1.00 (Fig. 3b). For predicting hospitalization, 66.67 percent of non-cancer studies showed AUC values from 0.90 to 1.00, whereas 28.57 percent of cancer studies showed AUC values of AI models in the same range (Fig. 3c). Among non-cancer studies, only one study provided an AUC value of an AI model (AUC 0.80~0.89) predicting hospitalization (Fig. 3d). For predicting mortality (3.85 percent), severity (8.33 percent), and hospitalization (14.29 percent), only cancer studies showed AUC values between 0.50 and 0.69. Additionally, based on the predicted values for mortality and severity, the support vector machine (SVM) showed the highest AUC compared with other models such as random forest (RF) or extreme gradient boosting (XGBoost) (Supplementary Fig. 1).

Fig. 3. Percentages of included studies based on AUC levels of AI models predicting outcomes: (a) mortality in cancer studies, (b) severity in cancer studies, (c) mortality in non-cancer studies, and (d) severity in non-cancer studies.

Important predictors in COVID-19 datasets with and without patients with cancer

To predict the mortality and severity of COVID-19 in both cancer and non-cancer studies, age was ranked as the most important predictor compared with others, such as psychological or neurological diseases (Supplementary Fig. 2). In cancer studies, cardiovascular disease was indicated as the most or second most important predictor of severity, whereas in non-cancer studies, no study indicated cardiovascular disease as an important predictor (Supplementary Fig. 2). Furthermore, despite the inclusion of data from patients with cancer, no study demonstrated cancer as an important predictor of severity.

Discussion

We conducted a systematic review to evaluate AI models that predict mortality, severity, hospitalization, and mechanical ventilation, as well as relevant predictors, by comparing cancer and non-cancer studies. According to the current study, the majority of non-cancer studies exhibited AUC values between 0.8 and 1, whereas cancer studies demonstrated more diverse AUC values, including values below 0.65. Although a higher AUC represents a better ability of an AI model to distinguish between positive and negative cases, AUC values of one among cancer studies could reflect overfitting of data with small samples for specific categories.79) Furthermore, the imbalance of comorbidity data observed in cancer studies might also contribute to the low and inconsistent AUC values among cancer studies compared with non-cancer studies. Because the classification of included data can improve the outcome of AI models subject to inter- and intra-observer variability, the degree of data imbalance, defined as the ratio of the sample size of the minority class to that of the majority class, could also influence model performance.80) Under- or overrepresentation of categories in the included datasets, as in the cancer studies in the current review, is a potential source of class imbalance among the collected patient data and of the diversity of model performance.81) In particular, when data from patients with cancer are included, the uncertainty of cancer-specific risk factors and the difficulty of obtaining balanced datasets for accurate prediction of outcomes such as mortality or severity of COVID-19 infection could be greater than in non-cancer studies.81,82) According to Ricci Lara et al., the low prevalence of certain conditions, such as patients with cancer infected with COVID-19 who have concurrent medical problems, might hinder the collection of representative data needed to provide a balanced dataset.83) Furthermore, overfitting caused by the smaller datasets available for some categories, including uneven data from patients with cancer, could also contribute to the low performance metrics of AI models in some cancer studies.81,82) Considering the close association between a high AUC and improved AI model performance, the consistently high AUC values in non-cancer studies, from 0.80 to over 0.90, might reflect more balanced datasets used for improved prediction.84) The diversity of database constructions related to COVID-19 infection,83) especially data of patients with existing comorbidities, might cause unequal AI model performance, such as differing AUC values. Therefore, more balanced datasets are still needed to provide consistent and improved model performance for predicting clinical outcomes of COVID-19 infection among cancer studies. Furthermore, we constructed a forest plot based on the AUC values obtained from the models applied to patients with and without cancer. In addition, based on the current investigation, RF and XGBoost were employed among the included studies to predict mortality and severity, with SVM exhibiting a trend toward the highest AUC value. However, it is difficult to definitively conclude that SVM was the best-performing model, as each study utilized data from different populations and the usage frequency of particular models varied.
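For clarity, the degree of data imbalance cited above can be written as the minority-to-majority sample-size ratio. The short sketch below illustrates the calculation; the cohort labels and counts are invented for illustration only.

```python
# Minimal sketch of the class-imbalance measure discussed above: the ratio of
# the minority-class sample size to the majority-class sample size.
# The label counts are hypothetical and purely illustrative.
from collections import Counter

labels = ["non-cancer"] * 950 + ["cancer"] * 50    # hypothetical cohort labels
counts = Counter(labels)
n_minority, n_majority = min(counts.values()), max(counts.values())
imbalance_ratio = n_minority / n_majority          # 1.0 would be perfectly balanced
print(f"imbalance ratio = {imbalance_ratio:.2f}")  # prints 0.05 for this example
```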

Additionally, the importance of each predictive indicator in cancer and non-cancer studies was evaluated in the current study. Age was equally important for all predictive indicators among the included studies that predicted the clinical outcomes of COVID-19. A previous systematic review investigating the association between various predictive factors and the risk of mortality due to COVID-19 demonstrated findings similar to ours.86) The results revealed an increased susceptibility to COVID-19-related mortality with advancing age (OR: 2.61, 95 percent CI: 1.75-3.47; HR: 1.31, 95 percent CI: 1.11-1.51).86) Since the onset of COVID-19, older age has been recognized as a risk factor.87) In particular, patients with various comorbidities, including cancer, are exposed to various types of medications that suppress the immune system, which may increase vulnerability to COVID-19 infection.88) Age-related alterations affect many aspects of the immune system, leading to decreased immunity against pathogens with increasing age.89) Aging is associated with high morbidity and mortality due to various infections and a significant decrease in vaccine efficacy.89) In the recently announced COVID-19 and Cancer Consortium (CCC19) cohort,90) the median age of patients with cancer and COVID-19 was 66 years, with 56 percent aged 65 years or older. The TERAVOLT cohort study of patients with thoracic malignancies and COVID-19 revealed a close association between age and increased risk of mortality (OR 1.88, 95 percent CI 1.0-3.6).91) However, the exact cause of this association is unclear, and further research is needed on how these factors interact with age in the context of COVID-19.

Cardiovascular disease (CVD) was also demonstrated to be an important factor across all predictive indicators in cancer studies, whereas in non-cancer studies it was shown to be a significant predictor of mortality. Among other comorbidities, CVD has been an independent predictor of mortality.92) This suggests that CVD is an independent risk factor for viral acquisition with serious consequences; therefore, the cumulative risk may be higher in patients with CVD.93) Increased concern about, and treatment of, patients with various comorbidities such as cancer could increase the CVD burden through elevated blood pressure and related diseases.93) Momtazmanesh et al.94) also indicated that preexisting and newly developed CVDs are common in patients with COVID-19 and are associated with increased severity and mortality in these patients. A previous systematic review of the mortality and severity of COVID-19 also demonstrated that CVD was associated with an increased risk of deteriorated outcomes in patients with COVID-19.95) Therefore, CVD plays an important role in the outcomes of patients with COVID-19 and requires careful consideration and management in clinical practice.

Our study has several limitations. First, there is a possibility that studies were overlooked owing to the search methodology used. Specific keywords were employed to search for relevant articles; although these keywords were effective in achieving the study objectives, there is a risk that important materials did not emerge in our search queries. Second, our results should be interpreted with caution because the criteria used to distinguish severe from non-severe patients were not uniform across studies. Third, we excluded deep learning when constructing the forest plots because no deep learning method had two or more AUC values. Therefore, future work is required to collect and analyze more relevant resources, and further studies on the presentation of predictor importance are needed.

Conclusion

In conclusion, the current systematic review demonstrated more diverse AUC values in cancer studies than in non-cancer studies. Among cancer studies, under- and over-representation of comorbidity data was observed. Considering that the AUC values were influenced by dataset balance, more balanced data should be applied to develop or evaluate AI models, and their predictors, for clinical outcomes such as the mortality or severity of COVID-19 in patients with various comorbidities.

Conflict of Interest

The authors have no conflicts of interest to declare with regard to the contents of this study.

References
  1. Chauhan S. Comprehensive review of coronavirus disease 2019 (covid-19). Biomed J. 2020;43(4):334-40.
  2. Silk BJ, Scobie HM, Duck WM, et al. Covid-19 surveillance after expiration of the public health emergency declaration - united states, may 11, 2023. MMWR Morb Mortal Wkly Rep. 2023;72(19):523-8.
  3. Centers for Disease Control and Prevention. Interim guidance on developing a covid-19 case investigation & contact tracing plan: overview. 2023. Available from: https://www.cdc.gov/coronavirus/2019-ncov/index.html. Accessed 03 August, 2023.
  4. Al-Quteimat OM, Amer AM. The impact of the covid-19 pandemic on cancer patients. Am J Clin Oncol. 2020;43(6):452-5.
  5. Dai M, Liu D, Liu M, et al. Patients with cancer appear more vulnerable to sars-cov-2: a multicenter study during the covid-19 outbreak. Cancer Discov. 2020;10(6):783-91.
  6. Salunke AA, Nandy K, Pathak SK, et al. Impact of covid 19 in cancer patients on severity of disease and fatal outcomes: a systematic review and meta-analysis. Diabetes Metab Syndr. 2020;14(5):1431-7.
  7. Yousefi L, Saachi L, Bellazzi R, Chiovato L, Tucker A. Predicting comorbidities using resampling and dynamic bayesian networks with latent variables. IEEE Computer Society. 2017;30:205-6.
  8. Silaghi-Dumitrescu R, Patrascu I, Lehene M, Bercea I. Comorbidities of covid-19 patients. Medicina (Kaunas). 2023;59(8):1393.
  9. Fok CC, Henry D, Allen J. Maybe small is too small a term: introduction to advancing small sample prevention science. Prev Sci. 2015;16(7):943-9.
  10. Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-43.
  11. Kim DK. Prediction models for covid-19 mortality using artificial intelligence. J Pers Med. 2022;12(9):1522.
  12. Abdulaal A, Patel A, Charani E, Denny S, Mughal N, Moore L. Prognostic modeling of covid-19 using artificial intelligence in the united kingdom: model development and validation. J Med Internet Res. 2020;22(8):e20259.
  13. Ngan Tran HC, Janet Jiang, Jay Bhuyan, Junhua Ding. Effect of class imbalance on the performance of machine learning-based network intrusion detection. Int J Performability Eng. 2021;17(9):741-55.
  14. Cartmell KB, Bonilha HS, Simpson KN, Ford ME, Bryant DC, Alberg AJ. Patient barriers to cancer clinical trial participation and navigator activities to assist. Adv Cancer Res. 2020;146:139-66.
  15. Page MJ, McKenzie JE, Bossuyt PM, et al. The prisma 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.
  16. Aghakhani A, Shoshtarian Malak J, Karimi Z, Vosoughi F, Zeraati H, Yekaninejad MS. Predicting the covid-19 mortality among iranian patients using tree-based models: a cross-sectional study. Health Sci Rep. 2023;6(5):e1279.
  17. Ahamad MM, Aktar S, Uddin MJ, et al. Adverse effects of covid-19 vaccination: machine learning and statistical approach to identify and classify incidences of morbidity and postvaccination reactogenicity. Healthcare (Basel). 2022;11(1):31.
  18. Upadhyay AK, Shukla S. Correlation study to identify the factors affecting covid-19 case fatality rates in india. Diabetes Metab Syndr. 2021;15(3):993-9.
  19. Banoei MM, Rafiepoor H, Zendehdel K, et al. Unraveling complex relationships between covid-19 risk factors using machine learning based models for predicting mortality of hospitalized patients and identification of high-risk group: a large retrospective study. Front Med (Lausanne). 2023;10:1170331.
  20. Carbonell G, Del Valle DM, Gonzalez-Kozlova E, et al. Quantitative chest computed tomography combined with plasma cytokines predict outcomes in covid-19 patients. Heliyon. 2022;8(8):e10166.
  21. An C, Lim H, Kim DW, Chang JH, Choi YJ, Kim SW. Machine learning prediction for mortality of patients diagnosed with covid-19: a nationwide korean cohort study. Sci Rep. 2020;10(1):18716.
  22. Gao Y, Chen L, Chi J, et al. Development and validation of an online model to predict critical covid-19 with immune-inflammatory parameters. J Intensive Care. 2021;9(1):19.
  23. Experton B, Tetteh HA, Lurie N, et al. A predictive model for severe covid-19 in the medicare population: a tool for prioritizing primary and booster covid-19 vaccination. Biology (Basel). 2021;10(11):1185.
  24. Khadem H, Nemat H, Eissa MR, Elliott J, Benaissa M. Covid-19 mortality risk assessments for individuals with and without diabetes mellitus: machine learning models integrated with interpretation framework. Comput Biol Med. 2022;144:105361.
  25. Heyl J, Hardy F, Tucker K, et al. Frailty, comorbidity, and associations with in-hospital mortality in older covid-19 patients: exploratory study of administrative data. Interact J Med Res. 2022;11(2):e41520.
  26. Hilal W, Chislett MG, Snider B, McBean EA, Yawney J, Gadsden SA. Use of ai to assess covid-19 variant impacts on hospitalization, icu, and death. Front Artif Intell. 2022;5:927203.
  27. Ikemura K, Bellin E, Yagi Y, et al. Using automated machine learning to predict the mortality of patients with covid-19: prediction model development study. J Med Internet Res. 2021;23(2):e23458.
  28. Jamshidi E, Asgary A, Tavakoli N, et al. Using machine learning to predict mortality for covid-19 patients on day 0 in the icu. Front Digit Health. 2021;3:681608.
  29. Razjouyan J, Helmer DA, Lynch KE, et al. Smoking status and factors associated with covid-19 in-hospital mortality among us veterans. Nicotine Tob Res. 2022;24(5):785-93.
  30. Edqvist J, Lundberg C, Andreasson K, et al. Severe covid-19 infection in type 1 and type 2 diabetes during the first three waves in sweden. Diabetes Care. 2023;46(3):570-8.
  31. Karasneh RA, Khassawneh BY, Al-Azzam S, et al. Risk factors associated with mortality in covid-19 hospitalized patients: data from the middle east. Int J Clin Pract. 2022;2022:9617319.
  32. Lee BH, Lee KS, Kim HI, et al. Blood transfusion, all-cause mortality and hospitalization period in covid-19 patients: machine learning analysis of national health insurance claims data. Diagnostics (Basel). 2022;12(12):2970.
  33. de Sandes-Freitas TV, Requião-Moura LR, et al; Modelli de Andrade LG. Development and validation of a simple web-based tool for early prediction of covid-19-associated death in kidney transplant recipients. Am J Transplant. 2022;22(2):610-25.
  34. Kivrak M, Guldogan E, Colak C. Prediction of death status on the course of treatment in sars-cov-2 patients with deep learning and machine learning methods. Comput Methods Programs Biomed. 2021;201:105951.
  35. Rahman MM, Islam MM, Manik MMH, Islam MR, Al-Rakhami MS. Machine learning approaches for tackling novel coronavirus (covid-19) pandemic. SN Comput Sci. 2021;2(5):384.
  36. Lorè NI, De Lorenzo R, Rancoita PMV, et al. Cxcl10 levels at hospital admission predict covid-19 outcome: hierarchical assessment of 53 putative inflammatory biomarkers in an observational study. Mol Med. 2021;27(1):129.
  37. Rasmy L, Nigo M, Kannadath BS, et al. Recurrent neural network models (covrnn) for predicting outcomes of patients with covid-19 on admission to hospital: model development and validation using electronic health record data. Lancet Digit Health. 2022;4(6):e415-25.
  38. Wollenstein-Betech S, Cassandras CG, Paschalidis IC. Personalized predictive models for symptomatic covid-19 patients using basic preconditions: hospitalizations, mortality, and the need for an icu or ventilator. Int J Med Inform. 2020;142:104258.
  39. Schmidt M, Guidet B, Demoule A, et al. Predicting 90-day survival of patients with covid-19: survival of severely ill covid (sosic) scores. Ann Intensive Care. 2021;11(1):170.
  40. Alle S, Kanakan A, Siddiqui S, et al. Covid-19 risk stratification and mortality prediction in hospitalized indian patients: harnessing clinical data for public health benefits. PLoS One. 2022;17(3):e0264785.
  41. Nojiri S, Irie Y, Kanamori R, Naito T, Nishizaki Y. Mortality prediction of covid-19 in hospitalized patients using the 2020 diagnosis procedure combination administrative database of japan. Intern Med. 2023;62(2):201-13.
  42. Snider JM, You JK, Wang X, et al. Group iia secreted phospholipase a2 is associated with the pathobiology leading to covid-19 mortality. J Clin Invest. 2021;131(19):e149236.
  43. Subudhi S, Verma A, Patel AB, et al. Comparing machine learning algorithms for predicting icu admission and mortality in covid-19. NPJ Digit Med. 2021;4(1):87.
  44. Kar S, Chawla R, Haranath SP, et al. Multivariable mortality risk prediction using machine learning for covid-19 patients at admission (aicovid). Sci Rep. 2021;11(1):12801.
  45. Wu JT, de la Hoz MÁ A, Kuo PC, et al. Developing and validating multi-modal models for mortality prediction in covid-19 patients: a multi-center retrospective study. J Digit Imaging. 2022;35(6):1514-29.
  46. Guan X, Zhang B, Fu M, et al. Clinical and inflammatory features based machine learning model for fatal risk prediction of hospitalized covid-19 patients: results from a retrospective cohort study. Ann Med. 2021;53(1):257-66.
  47. Jung C, Excoffier JB, Raphaël-Rousseau M, Salaün-Penquer N, Ortala M, Chouaid C. Evolution of hospitalized patient characteristics through the first three covid-19 waves in paris area using machine learning analysis. PLoS One. 2022;17(2):e0263266.
  48. Zhao C, Bai Y, Wang C, et al. Risk factors related to the severity of covid-19 in wuhan. Int J Med Sci. 2021;18(1):120-7.
  49. Jiao Z, Choi JW, Halsey K, et al. Prognostication of patients with covid-19 using artificial intelligence based on chest x-rays and clinical data: a retrospective study. Lancet Digit Health. 2021;3(5):e286-94.
  50. Kang J, Chen T, Luo H, Luo Y, Du G, Jiming-Yang M. Machine learning predictive model for severe covid-19. Infect Genet Evol. 2021;90:104737.
  51. Wong KC, Xiang Y, Yin L, So HC. Uncovering clinical risk factors and predicting severe covid-19 cases using uk biobank data: machine learning approach. JMIR Public Health Surveill. 2021;7(9):e29544.
  52. Rojas-García M, Vázquez B, Torres-Poveda K, Madrid-Marina V. Lethality risk markers by sex and age-group for covid-19 in mexico: a cross-sectional study based on machine learning approach. BMC Infect Dis. 2023;23(1):18.
  53. Burns SM, Woodworth TS, Icten Z, Honda T, Manjourides J. A machine learning approach to identify predictors of severe covid-19 outcome in patients with rheumatoid arthritis. Pain Physician. 2022;25(8):593-602.
  54. Wang R, Jiao Z, Yang L, et al. Artificial intelligence for prediction of covid-19 progression using ct imaging and clinical data. Eur Radiol. 2022;32(1):205-12.
  55. Chen Y, Ouyang L, Bao FS, et al. A multimodality machine learning approach to differentiate severe and nonsevere covid-19: model development and validation. J Med Internet Res. 2021;23(4):e23948.
  56. De Freitas VM, Chiloff DM, Bosso GG, et al. A machine learning model for predicting hospitalization in patients with respiratory symptoms during the covid-19 pandemic. J Clin Med. 2022;11(15):4574.
  57. Jehi L, Ji X, Milinovich A, et al. Development and validation of a model for individualized prediction of hospitalization risk in 4,536 patients with covid-19. PLoS One. 2020;15(8):e0237419.
  58. Hao B, Sotudian S, Wang T, et al. Early prediction of level-of-care requirements in patients with covid-19. Elife. 2020;9:e60519.
  59. Aminu M, Yadav D, Hong L, et al. Habitat imaging biomarkers for diagnosis and prognosis in cancer patients infected with covid-19. Cancers (Basel). 2022;15(1):275.
  60. Chen Z, Russo NW, Miller MM, Murphy RX, Burmeister DB. An observational study to develop a scoring system and model to detect risk of hospital admission due to covid-19. J Am Coll Emerg Physicians Open. 2021;2(2):e12406.
  61. Churpek MM, Gupta S, Spicer AB, et al. Machine learning prediction of death in critically ill patients with coronavirus disease 2019. Crit Care Explor. 2021;3(8):e0515.
  62. Elghamrawy SM, Hassanien AE, Vasilakos AV. Genetic-based adaptive momentum estimation for predicting mortality risk factors for covid-19 patients using deep learning. Int J Imaging Syst Technol. 2022;32(2):614-28.
  63. Khadem H, Nemat H, Elliott J, Benaissa M. Interpretable machine learning for inpatient covid-19 mortality risk assessments: diabetes mellitus exclusive interplay. Sensors (Basel). 2022;22(22):8757.
  64. Kablan R, Miller HA, Suliman S, Frieboes HB. Evaluation of stacked ensemble model performance to predict clinical outcomes: a covid-19 study. Int J Med Inform. 2023;175:105090.
  65. Ovcharenko E, Kutikhin A, Gruzdeva O, et al. Cardiovascular and renal comorbidities included into neural networks predict the outcome in covid-19 patients admitted to an intensive care unit: three-center, cross-validation, age- and sex-matched study. J Cardiovasc Dev Dis. 2023;10(2):39.
  66. Passarelli-Araujo H, Passarelli-Araujo H, Urbano MR, Pescim RR. Machine learning and comorbidity network analysis for hospitalized patients with covid-19 in a city in southern brazil. Smart Health (Amst). 2022;26:100323.
  67. Pournazari P, Spangler AL, Ameer F, et al. Cardiac involvement in hospitalized patients with covid-19 and its incremental value in outcomes prediction. Sci Rep. 2021;11(1):19450.
  68. Pyrros A, Rodriguez Fernandez J, Borstelmann SM, et al. Validation of a deep learning, value-based care model to predict mortality and comorbidities from chest radiographs in covid-19. PLOS Digit Health. 2022;1(8):e0000057.
  69. Yazdani A, Bigdeli SK, Zahmatkeshan M. Investigating the performance of machine learning algorithms in predicting the survival of covid-19 patients: a cross section study of iran. Health Sci Rep. 2023;6(4):e1212.
  70. Wang JM, Liu W, Chen X, McRae MP, McDevitt JT, Fenyö D. Predictive modeling of morbidity and mortality in patients hospitalized with covid-19 and its clinical implications: algorithm development and interpretation. J Med Internet Res. 2021;23(7):e29514.
  71. Woo SH, Rios-Diaz AJ, Kubey AA, et al. Development and validation of a web-based severe covid-19 risk prediction model. Am J Med Sci. 2021;362(4):355-62.
  72. Ageno W, Cogliati C, Perego M, et al. Clinical risk scores for the early prediction of severe outcomes in patients hospitalized for covid-19. Intern Emerg Med. 2021;16(4):989-96.
  73. Carr E, Bendayan R, Bean D, et al. Evaluation and improvement of the national early warning score (news2) for covid-19: a multi-hospital study. BMC Med. 2021;19(1):23.
  74. Min K, Cheng Z, Liu J, et al. Early-stage predictors of deterioration among 3145 nonsevere sars-cov-2-infected people community-isolated in wuhan, china: a combination of machine learning algorithms and competing risk survival analyses. J Evid Based Med. 2023;16(2):166-77.
  75. Sun L, Song F, Shi N, et al. Combination of four clinical indicators predicts the severe/critical symptom of patients infected covid-19. J Clin Virol. 2020;128:104431.
  76. Lipták P, Banovcin P, Rosoľanka R, et al. A machine learning approach for identification of gastrointestinal predictors for the risk of covid-19 related hospitalization. PeerJ. 2022;10:e13124.
  77. Nakamichi K, Shen JZ, Lee CS, et al. Hospitalization and mortality associated with sars-cov-2 viral clades in covid-19. Sci Rep. 2021;11(1):4802.
  78. Tariq A, Celi LA, Newsome JM, et al. Patient-specific covid-19 resource utilization prediction using fusion ai model. NPJ Digit Med. 2021;4(1):94.
  79. Shakibfar S, Nyberg F, Li H, et al. Artificial intelligence-driven prediction of covid-19-related hospitalization and death: a systematic review. Front Public Health. 2023;11:1183725.
  80. Giang Hoang N, Abdesselam B, Son Lam P. In: Peng-Yeng Y, ed. Pattern recognition. Ch. 10. Rijeka: IntechOpen, 2009:193-208.
  81. Tasci E, Zhuge Y, Camphausen K, Krauze AV. Bias and class imbalance in oncologic data-towards inclusive and transferrable ai in large scale oncology data sets. Cancers (Basel). 2022;14(12):2897.
  82. Navlakha S, Morjaria S, Perez-Johnston R, Zhang A, Taur Y. Projecting covid-19 disease severity in cancer patients using purposefully-designed machine learning. BMC Infect Dis. 2021;21(1):391.
  83. Ricci Lara MA, Echeveste R, Ferrante E. Addressing fairness in artificial intelligence for medical imaging. Nature Communications. 2022;13(1):4581.
  84. Mandrekar JN. Receiver operating characteristic curve in diagnostic test assessment. J Thorac Oncol. 2010;5(9):1315-6.
  85. Huang S, Cai N, Pacheco PP, Narrandes S, Wang Y, Xu W. Applications of support vector machine (svm) learning in cancer genomics. Cancer Genomics Proteomics. 2018;15(1):41-51.
  86. Dessie ZG, Zewotir T. Mortality-related risk factors of covid-19: a systematic review and meta-analysis of 42 studies and 423,117 patients. BMC Infect Dis. 2021;21(1):855.
  87. Wu Z, McGoogan JM. Characteristics of and important lessons from the coronavirus disease 2019 (covid-19) outbreak in china: summary of a report of 72 314 cases from the chinese center for disease control and prevention. JAMA. 2020;323(13):1239-42.
  88. Cazeau N, Palazzo M, Savani M, Shroff RT. Covid-19 vaccines and immunosuppressed patients with cancer: critical considerations. Clin J Oncol Nurs. 2022;26(4):367-73.
  89. Bartleson JM, Radenkovic D, Covarrubias AJ, Furman D, Winer DA, Verdin E. Sars-cov-2, covid-19 and the aging immune system. Nature Aging. 2021;1(9):769-82.
  90. Kuderer NM, Choueiri TK, Shah DP, et al. Clinical impact of covid-19 on patients with cancer (ccc19): a cohort study. Lancet. 2020;395(10241):1907-18.
  91. Garassino MC, Whisenant JG, Huang LC, et al. Covid-19 in patients with thoracic malignancies (teravolt): first results of an international, registry-based, cohort study. Lancet Oncol. 2020;21(7):914-22.
  92. Tehrani D, Wang X, Rafique AM, et al. Impact of cancer and cardiovascular disease on in-hospital outcomes of covid-19 patients: results from the american heart association covid-19 cardiovascular disease registry. Cardiooncology. 2021;7(1):28.
  93. Asokan I, Rabadia SV, Yang EH. The covid-19 pandemic and its impact on the cardio-oncology population. Curr Oncol Rep. 2020;22(6):60.
  94. Momtazmanesh S, Shobeiri P, Hanaei S, Mahmoud-Elsayed H, Dalvi B, Malakan Rad E. Cardiovascular disease in covid-19: a systematic review and meta-analysis of 10,898 patients and proposal of a triage risk stratification tool. The Egyptian Heart Journal. 2020;72(1):41.
  95. Kazemi E, Soldoozi Nejat R, Ashkan F, Sheibani H. The laboratory findings and different covid-19 severities: a systematic review and meta-analysis. Annals of Clinical Microbiology and Antimicrobials. 2021;20(1):17.

