Description: |
CONTEXT: Documented goals-of-care discussions are an important quality metric for patients with serious illness. Natural language processing (NLP) is a promising approach for identifying goals-of-care discussions in the electronic health record (EHR).
OBJECTIVES: To compare three NLP modeling approaches for identifying EHR documentation of goals-of-care discussions and to generate hypotheses about differences in their performance.
METHODS: We conducted a mixed-methods study to evaluate performance and misclassification for three NLP featurization approaches modeled with regularized logistic regression: bag-of-words (BOW), rule-based, and a hybrid approach. From a prospective cohort of 150 patients hospitalized with serious illness during 2018–2020, we collected 4,391 inpatient EHR notes, of which 99 (2.3%) contained documented goals-of-care discussions. We used leave-one-out cross-validation to estimate performance, comparing pooled NLP predictions against labels assigned by human abstractors in receiver-operating-characteristic (ROC) and precision-recall (PR) analyses. We qualitatively examined a purposive sample of 70 NLP-misclassified notes using content analysis to identify linguistic features associated with misclassification and to generate hypotheses about its causes.
RESULTS: All three modeling approaches discriminated between notes with and without goals-of-care discussions (AUC(ROC): BOW, 0.907; rule-based, 0.948; hybrid, 0.965). Precision and recall were only moderate (precision at 70% recall: BOW, 16.2%; rule-based, 50.4%; hybrid, 49.3%; AUC(PR): BOW, 0.505; rule-based, 0.579; hybrid, 0.599). Qualitative analysis revealed patterns underlying the performance differences between the BOW and rule-based approaches.
CONCLUSION: NLP holds promise for identifying EHR-documented goals-of-care discussions. However, the rarity of goals-of-care content in EHR data limits performance. Our findings highlight opportunities to optimize NLP modeling and support further exploration of approaches for identifying goals-of-care discussions.
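For illustration, the following is a minimal sketch of the BOW arm of the pipeline described in METHODS, assuming scikit-learn: bag-of-words featurization, L2-regularized logistic regression, leave-one-out cross-validation with pooled held-out predictions, and ROC/PR scoring. The toy notes, labels, and hyperparameters are illustrative placeholders, not the study's data or implementation.

  import numpy as np
  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import LeaveOneOut
  from sklearn.metrics import roc_auc_score, average_precision_score

  # Toy corpus standing in for inpatient EHR notes (illustrative only).
  notes = np.array([
      "Family meeting held to discuss goals of care; patient prefers comfort-focused treatment.",
      "Goals of care discussion: patient declines intubation, code status changed to DNR/DNI.",
      "Routine progress note: vitals stable, continue antibiotics and physical therapy.",
      "Discharge planning note: follow-up appointment scheduled with primary care.",
      "Radiology note: chest x-ray shows no acute cardiopulmonary process.",
      "Nursing note: patient ambulating in hallway, tolerating diet.",
  ])
  labels = np.array([1, 1, 0, 0, 0, 0])  # 1 = documented goals-of-care discussion

  pooled = np.empty(len(labels), dtype=float)

  # Leave-one-out cross-validation: fit the vectorizer and classifier on the
  # training fold only, then pool each held-out prediction for evaluation.
  for train_idx, test_idx in LeaveOneOut().split(notes):
      vectorizer = CountVectorizer()                 # bag-of-words featurization
      X_train = vectorizer.fit_transform(notes[train_idx])
      X_test = vectorizer.transform(notes[test_idx])
      clf = LogisticRegression(penalty="l2", C=1.0)  # regularized logistic regression
      clf.fit(X_train, labels[train_idx])
      pooled[test_idx] = clf.predict_proba(X_test)[:, 1]

  # Score pooled predictions against the human-abstracted reference labels.
  print("AUC(ROC):", roc_auc_score(labels, pooled))
  print("AUC(PR): ", average_precision_score(labels, pooled))

Refitting the vectorizer inside each fold keeps the held-out note's vocabulary out of training, mirroring how pooled cross-validated predictions are compared against the human-abstracted reference labels.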