Abstract
Accurate mortality prediction allows Intensive Care Units (ICUs) to adequately benchmark clinical practice and identify patients with unexpected outcomes. Traditionally, simple statistical models have been used to assess patient death risk, often with suboptimal performance. Deep Learning, on the other hand, holds promise to positively impact clinical practice by leveraging medical data to assist diagnosis and prediction, including mortality prediction. However, the question of whether powerful Deep Learning models attend to correlations backed by sound medical knowledge when generating predictions remains open, so additional interpretability tools are needed to foster trust and encourage the adoption of AI by clinicians. In this work we present an interpretable Deep Learning model trained on MIMIC-III to predict mortality inside the ICU from raw nursing notes, together with visual explanations of word importance based on the Shapley value. Our model reaches an area under the ROC curve (AUROC) of 0.8629 (±0.0058), outperforming the traditional SAPS-II score and an LSTM recurrent neural network baseline, while providing enhanced interpretability compared with similar Deep Learning approaches. Supporting code can be found at https://github.com/williamcaicedo/ISeeU2. © 2022 Elsevier Ltd
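For context, the word-importance explanations mentioned above rely on the Shapley value from cooperative game theory. In its standard form, the attribution assigned to a feature $i$ out of a feature set $N$, given a value function $v$ over feature coalitions $S$, is the weighted average of $i$'s marginal contributions:

```latex
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\Bigl( v\bigl(S \cup \{i\}\bigr) - v(S) \Bigr)
```

In a text-based mortality model such as the one described, a natural (assumed) instantiation is to treat each word in a nursing note as a player and $v(S)$ as the model's predicted death risk when only the words in $S$ are present; exact computation is exponential in $|N|$, so practical attributions are typically obtained via sampling or approximation methods.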