What Kind of Transformer Models to Use for the ICD-10 Codes Classification Task.

Authors: Mansour M, Yilmaz F, Miletic M, Sariyar M; Bern University of Applied Sciences, Switzerland.
Language: English
Source: Studies in Health Technology and Informatics [Stud Health Technol Inform] 2024 Aug 22; Vol. 316, pp. 1008-1012.
DOI: 10.3233/SHTI240580
Abstract: Coding according to the International Classification of Diseases (ICD)-10 and its clinical modifications (CM) is inherently complex and expensive. Natural Language Processing (NLP) assists by simplifying the analysis of unstructured data from electronic health records, thereby facilitating diagnosis coding. This study investigates the suitability of transformer models for ICD-10 classification, considering both encoder and encoder-decoder architectures. The analysis is performed on clinical discharge summaries from the Medical Information Mart for Intensive Care (MIMIC)-IV dataset, which contains an extensive collection of electronic health records. Pre-trained models such as BioBERT, ClinicalBERT, Clinical-Longformer, and Clinical-BigBird are adapted for the coding task, incorporating specific preprocessing techniques to enhance performance. The findings indicate that increasing context length improves accuracy, and that the difference in accuracy between encoder and encoder-decoder models is negligible.
Database: MEDLINE
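
The abstract describes adapting pre-trained clinical encoders to ICD-10 coding, which is typically framed as multi-label classification over discharge summaries. Below is a minimal Python sketch of that setup using the Hugging Face transformers library. The checkpoint name (yikuan8/Clinical-Longformer), the label count of 50, the 4096-token context length, and the 0.5 decision threshold are all illustrative assumptions; the paper does not publish its exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed long-context clinical checkpoint and label count (not from the paper).
MODEL_NAME = "yikuan8/Clinical-Longformer"
NUM_CODES = 50  # assumption: e.g., the 50 most frequent ICD-10 codes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_CODES,
    problem_type="multi_label_classification",  # a summary can carry several codes
)

# A toy discharge summary; real MIMIC-IV notes are far longer, which is why
# long-context models (Longformer, BigBird) can help, per the paper's finding.
summary = "Patient admitted with acute exacerbation of COPD and type 2 diabetes."
inputs = tokenizer(summary, truncation=True, max_length=4096, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Independent sigmoid per label; predict a code when its probability exceeds 0.5.
probs = torch.sigmoid(logits)
predicted = (probs > 0.5).nonzero(as_tuple=True)[1].tolist()
print(predicted)
```

In practice the classification head would be fine-tuned on labeled summaries with a binary cross-entropy loss (the default for the multi-label problem type) before the predictions above become meaningful; the untrained head here only illustrates the inference path.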