CLIP: A Dataset for Extracting Action Items for Physicians from Hospital Discharge Notes
Author: | Hui Dai, Yada Pruksachatkun, Sean Adler, James Mullenbach, Yi Yang, David Sontag, Jennifer Seale, Jordan Swartz, T. Greg McKelvey |
Year of publication: | 2021 |
Subject: | FOS: Computer and information sciences; Computer Science - Machine Learning; Computer Science - Computation and Language; Information retrieval; Exploit; Computer science; Information sharing; Machine Learning (stat.ML); Context (language use); Automatic summarization; Machine Learning (cs.LG); Task (project management); Action (philosophy); Statistics - Machine Learning; Hospital discharge; Language model; Computation and Language (cs.CL) |
Source: | ACL/IJCNLP (1) |
Description: | Continuity of care is crucial to ensuring positive health outcomes for patients discharged from an inpatient hospital setting, and improved information sharing can help. To share information, caregivers write discharge notes containing action items to share with patients and their future caregivers, but these action items are easily lost due to the lengthiness of the documents. In this work, we describe our creation of a dataset of clinical action items annotated over MIMIC-III, the largest publicly available dataset of real clinical notes. This dataset, which we call CLIP, is annotated by physicians and covers 718 documents representing 100K sentences. We describe the task of extracting the action items from these documents as multi-aspect extractive summarization, with each aspect representing a type of action to be taken. We evaluate several machine learning models on this task, and show that the best models exploit in-domain language model pre-training on 59K unannotated documents and incorporate context from neighboring sentences. We also propose an approach to pre-training data selection that allows us to explore the trade-off between size and domain-specificity of pre-training datasets for this task. Comment: ACL 2021. (See the illustrative sketch of the task framing below the record.) |
Database: | OpenAIRE |
External link: |
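
Below is a minimal sketch of the task framing described in the abstract: each sentence of a discharge note is treated as a multi-label classification instance (one label per action-item type), with neighboring sentences concatenated as context. The model name (`emilyalsentzer/Bio_ClinicalBERT`), the label set, the context window, and the helper `classify_sentence` are illustrative assumptions, not the authors' released code or their exact label taxonomy.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed action-item label set; the CLIP paper defines its own taxonomy.
LABELS = ["appointment", "lab", "procedure", "medication", "imaging",
          "patient_instruction", "other"]

# A public clinical-domain BERT, used here as a stand-in for in-domain pre-training.
MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # a sentence may carry several action types
)
model.eval()

def classify_sentence(sentences, i, window=1, threshold=0.5):
    """Label sentence i of a note, concatenating `window` neighbors on each side as context."""
    left = " ".join(sentences[max(0, i - window):i])
    right = " ".join(sentences[i + 1:i + 1 + window])
    text = " ".join(part for part in (left, sentences[i], right) if part)
    enc = tokenizer(text, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        probs = torch.sigmoid(model(**enc).logits)[0]
    return [label for label, p in zip(LABELS, probs) if p >= threshold]

note = [
    "Patient admitted for community-acquired pneumonia.",
    "Please follow up with PCP in 7 days and repeat chest x-ray in 6 weeks.",
    "Discharged home in stable condition.",
]
# With an untrained classification head the output is arbitrary; fine-tuning on CLIP is required.
print(classify_sentence(note, 1))
```

Concatenating neighbors into a single input is only one way to supply sentence context; the paper evaluates several modeling choices, and this sketch does not reproduce them.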