Comparing Interpretable AI Approaches for the Clinical Environment: An Application to COVID-19

Nobile, MS;
2022-01-01

Abstract

Machine Learning (ML) models play an important role in healthcare thanks to their remarkable performance in predicting complex phenomena. During the COVID-19 pandemic, various ML models were deployed to support decisions in medical settings. However, clinical experts need to ensure that these models are valid, provide clinically useful information, and are implemented and used correctly. To that end, they need to understand the logic behind the models in order to trust them; hence, developing transparent and interpretable models is increasingly relevant. In this work, we apply four interpretable ML models, namely logistic regression, decision trees, pyFUME, and RIPPER, to classify suspected COVID-19 patients based on clinical data collected from blood samples. After preprocessing the data set and training the models, we evaluate them on their predictive performance. We then illustrate that interpretability can be achieved in different ways. First, SHAP explanations are built on top of the logistic regression and decision tree models to obtain feature importances. Then, the potential of pyFUME and RIPPER to provide inherent interpretability is highlighted. Finally, potential ways to achieve trust in future studies are briefly discussed.
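As an illustration of the post-hoc route mentioned in the abstract, the sketch below trains a shallow decision tree on tabular blood-test data and derives SHAP feature importances with the shap package. It is a minimal sketch only: the CSV path, the covid_positive label column, and the feature layout are hypothetical placeholders, not the study's actual data or pipeline.

# Minimal sketch: decision tree + SHAP feature importances.
# The file name and label column below are hypothetical placeholders.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical dataset: one row per patient, blood-test features plus a
# binary COVID-19 label column named "covid_positive".
data = pd.read_csv("blood_samples.csv")
X = data.drop(columns=["covid_positive"])
y = data["covid_positive"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Interpretable-by-design model: a shallow decision tree.
tree = DecisionTreeClassifier(max_depth=4, random_state=42)
tree.fit(X_train, y_train)

# Post-hoc explanation: TreeExplainer computes exact SHAP values for trees.
explainer = shap.TreeExplainer(tree)
shap_values = explainer.shap_values(X_test)

# Global feature importance: mean absolute SHAP value per feature.
shap.summary_plot(shap_values, X_test, plot_type="bar")

The same post-hoc recipe applies to the logistic regression model by swapping in a SHAP explainer suited to linear models.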
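For the inherently interpretable route, RIPPER learns an IF-THEN rule set that clinicians can read directly, with no separate explanation step. The sketch below uses the wittgenstein package, one possible Python implementation of RIPPER; the paper does not state which implementation it used, and the dataset and column names are again hypothetical.

# Minimal sketch: rule induction with RIPPER via the `wittgenstein`
# package (an assumed implementation choice, not confirmed by the paper).
import pandas as pd
import wittgenstein as lw

data = pd.read_csv("blood_samples.csv")  # hypothetical file

ripper = lw.RIPPER()
# class_feat names the binary label column; pos_class is its positive value.
ripper.fit(data, class_feat="covid_positive", pos_class=1)

ripper.out_model()  # prints the learned IF-THEN rules, readable as-is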
2022 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)

Documents in ARCA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5007341
Citations
  • PubMed Central: ND
  • Scopus: 2
  • Web of Science (ISI): 0