Natural language inference for curation of structured clinical registries from unstructured text.
Author: Percha B; Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA.; Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, New York, USA., Pisapati K; Mount Sinai Innovation Partners, Mount Sinai Health System, New York, New York, USA.; Breast Surgical Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.; Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA., Gao C; Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA., Schmidt H; Breast Surgical Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA.; Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
Language: English
Source: Journal of the American Medical Informatics Association: JAMIA [J Am Med Inform Assoc] 2021 Dec 28; Vol. 29 (1), pp. 97-108.
DOI: 10.1093/jamia/ocab243
Abstract:
Objective: Clinical registries (structured databases of demographic, diagnosis, and treatment information) play vital roles in retrospective studies, operational planning, and assessment of patient eligibility for research, including clinical trials. Registry curation, a manual and time-intensive process, is always costly and often impossible for rare or underfunded diseases. Our goal was to evaluate the feasibility of natural language inference (NLI) as a scalable solution for registry curation.
Materials and Methods: We applied five state-of-the-art, pretrained, deep learning-based NLI models to clinical, laboratory, and pathology notes to infer information about 43 different breast oncology registry fields. Model inferences were evaluated against a manually curated, 7439-patient breast oncology research database.
Results: NLI models showed considerable variation in performance, both within and across fields. One model, ALBERT, outperformed the others (BART, RoBERTa, XLNet, and ELECTRA) on 22 of 43 fields. A detailed error analysis revealed that incorrect inferences arose primarily from the models' tendency to misinterpret historical findings, as well as from confusion caused by abbreviations and subtle term variants common in clinical text.
Discussion and Conclusion: Traditional natural language processing methods require specially annotated training sets or the construction of a separate model for each registry field. In contrast, a single pretrained NLI model can curate dozens of different fields simultaneously. Surprisingly, NLI methods remain unexplored in the clinical domain outside the realm of shared tasks and benchmarks. Modern NLI models could increase the efficiency of registry curation, even when applied "out of the box" with no additional training.
(© The Author(s) 2021. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.)
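The abstract's core framing, in which a single pretrained NLI model curates many registry fields, treats each note as a premise and each candidate field value as a hypothesis, then selects the value the note best entails. The sketch below illustrates that framing only; the `entailment_score` function is a toy keyword-overlap stand-in (not one of the pretrained models the authors evaluated), and the ER-status field and example note are hypothetical:

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def entailment_score(premise: str, hypothesis: str) -> float:
    """Toy stand-in for a pretrained NLI model's entailment probability:
    fraction of hypothesis tokens that also appear in the premise."""
    p, h = _tokens(premise), _tokens(hypothesis)
    return len(p & h) / len(h) if h else 0.0

def curate_field(note: str, hypotheses: dict[str, str]) -> str:
    """Return the field value whose hypothesis the note best entails."""
    return max(hypotheses, key=lambda value: entailment_score(note, hypotheses[value]))

# Hypothetical registry field: estrogen-receptor (ER) status.
er_hypotheses = {
    "positive": "the tumor is estrogen receptor positive",
    "negative": "the tumor is estrogen receptor negative",
}

note = "Pathology: invasive ductal carcinoma, estrogen receptor positive."
print(curate_field(note, er_hypotheses))  # → positive
```

In the paper's setting, the scorer would be replaced by an actual pretrained NLI model, which is what lets one model cover dozens of fields: only the hypothesis strings change per field, with no field-specific training.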
Database: MEDLINE
External link: