Topic Modeling for Interpretable Text Classification From EHRs

Author: Rijcken, Emil; Kaymak, Uzay; Scheepers, F.E.; Mosteiro Romero, Pablo; Zervanou, Kalliopi; Spruit, Marco
Contributors: Sub Natural Language Processing, Natural Language Processing, JADS Research, JADS Den Bosch (TU/e), EAISI Health, EAISI Foundational
Language: English
Year of publication: 2022
Subject:
Source: Frontiers in Big Data, 5:846930. Frontiers Media S.A.
ISSN: 2624-909X
DOI: 10.3389/fdata.2022.846930
Description: Clinical notes in electronic health records offer many possibilities for predictive text classification tasks. In the clinical domain, the interpretability of these classification models is critical for decision making. Using topic models for text classification of electronic health records allows topics to serve as features, making the classification more interpretable. However, selecting the most effective topic model is not trivial. In this work, we propose considerations for selecting a suitable topic model based on predictive performance and an interpretability measure for text classification. We compare 17 topic models in terms of both interpretability and predictive performance on an inpatient violence prediction task using clinical notes. We find no correlation between interpretability and predictive performance. Moreover, although no model outperforms the others on both criteria, our proposed fuzzy topic modeling algorithm (FLSA-W) performs best in most settings for interpretability, whereas two state-of-the-art methods (ProdLDA and LSI) achieve the best predictive performance.
Database: OpenAIRE
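
The description outlines the general "topics as features" approach: documents are mapped to topic distributions, and a classifier is trained on those distributions so that its weights refer to topics rather than raw words. Below is a minimal sketch of that idea; it is not the paper's FLSA-W algorithm, it substitutes scikit-learn's LatentDirichletAllocation and LogisticRegression, and the toy notes and labels are hypothetical stand-ins for de-identified clinical text.

```python
# Illustrative sketch (not the paper's FLSA-W): topic distributions as
# interpretable features for a downstream classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy notes with binary violence labels (1 = incident).
notes = [
    "patient calm and cooperative during intake",
    "patient agitated, shouting, threatened staff",
    "slept well and attended group therapy session",
    "threw a chair and required physical restraint",
]
labels = [0, 1, 0, 1]

# Bag-of-words -> document-topic distributions -> linear classifier.
# The classifier's coefficients then weight topics instead of individual
# words, which is what makes the resulting model easier to interpret.
pipeline = make_pipeline(
    CountVectorizer(),
    LatentDirichletAllocation(n_components=2, random_state=0),
    LogisticRegression(),
)
pipeline.fit(notes, labels)

print(pipeline.predict(["patient threatened another patient"]))
```

In the paper's setting, the topic model and the number of topics are the quantities being compared across 17 candidates; the sketch fixes both only to keep the example short.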