Development and Validation of an Artificial Intelligence System to Optimize Clinician Review of Patient Records
Author: Sidhartha R. Sinha, Pranav Rajpurkar, Cheuk To Tsui, Chiraag Kulkarni, Yan Jiang, Andrew Y. Ng, Karolin Jarr, Ethan A. Chi, Michael Zhang, Jin Long, Gordon Chi
Year of publication: 2021
Subject: Adult, Male, Female, Middle Aged, Humans, Time Factors, Artificial Intelligence, Referral and Consultation, Patient referral, MEDLINE, Information Storage and Retrieval, Workload, Job Satisfaction, Medical Records, Medical record, Physicians, Task Performance and Analysis, Prospective Studies, Prospective cohort study, Academic Medical Centers, Medicine, General Medicine, Relevance (information retrieval), Data extraction, User-Centered Design
Source: JAMA Network Open. 4:e2117391
ISSN: 2574-3805
DOI: 10.1001/jamanetworkopen.2021.17391
Description:

Importance: Physicians are required to work with rapidly growing amounts of medical data. Approximately 62% of time per patient is devoted to reviewing electronic health records (EHRs), with clinical data review being the most time-consuming portion.

Objective: To determine whether an artificial intelligence (AI) system developed to organize and display new patient referral records would improve a clinician's ability to extract patient information compared with the current standard of care.

Design, setting, and participants: In this prognostic study, an AI system was created to organize patient records and improve data retrieval. To evaluate the system on time and accuracy, a nonblinded, prospective study was conducted at a single academic medical center. Recruitment emails were sent to all physicians in the gastroenterology division, and 12 clinicians agreed to participate. Each participating clinician received 2 referral records: 1 AI-optimized patient record and 1 standard (non-AI-optimized) patient record. For each record, clinicians were asked 22 questions requiring them to search the assigned record for clinically relevant information. Clinicians reviewed records from June 1 to August 30, 2020.

Main outcomes and measures: The time required to answer each question, along with accuracy, was measured for both records, with and without AI optimization. Participants were asked to assess overall satisfaction with the AI system, their preferred review method (AI-optimized vs standard), and other topics to assess clinical utility.

Results: Twelve gastroenterology physicians/fellows completed the study. Compared with standard (non-AI-optimized) patient record review, the AI system saved first-time physician users 18% of the time used to answer the clinical questions (10.5 [95% CI, 8.5-12.6] vs 12.8 [95% CI, 9.4-16.2] minutes; P = .02). There was no significant decrease in accuracy when physicians retrieved important patient information (83.7% [95% CI, 79.3%-88.2%] with the AI-optimized record vs 86.0% [95% CI, 81.8%-90.2%] without it; P = .81). Survey responses from physicians were generally positive across all questions. Eleven of 12 physicians (92%) preferred the AI-optimized record review to standard review. Despite a learning curve noted by respondents, 11 of 12 physicians believed that the technology would save them time when assessing new patient records and were interested in using it in their clinics.

Conclusions and relevance: In this prognostic study, an AI system helped physicians extract relevant patient information in a shorter time while maintaining high accuracy. This finding is particularly germane to the ever-increasing amount of medical data and the growing stressors on clinicians. Greater user familiarity with the AI system, along with further enhancements to the system itself, holds promise to further improve physician data extraction from large quantities of patient health records.
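The Results paragraph reports a paired comparison: each of the 12 clinicians reviewed both an AI-optimized and a standard record, and mean review times with 95% CIs and a P value are given. As a minimal sketch of how such a paired time comparison could be computed, assuming per-clinician timing data and a paired t-test (the abstract does not state which statistical test the authors used), the following Python example uses entirely hypothetical numbers:

```python
# Illustrative sketch only: mirrors the *type* of analysis described in the abstract
# (paired comparison of review time with vs without the AI-optimized record).
# It is not the authors' code, and all values below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical per-clinician review times in minutes (n = 12), one pair per participant.
time_ai = np.array([9.8, 11.2, 10.1, 12.0, 8.9, 10.7, 11.5, 9.4, 10.9, 12.3, 9.0, 10.6])
time_standard = np.array([12.1, 13.5, 11.9, 15.2, 10.4, 13.0, 14.1, 11.2, 13.8, 15.0, 10.9, 12.7])

def mean_ci(x, confidence=0.95):
    """Mean with a two-sided t-based confidence interval."""
    m = x.mean()
    half_width = stats.t.ppf(1 - (1 - confidence) / 2, df=len(x) - 1) * stats.sem(x)
    return m, (m - half_width, m + half_width)

mean_ai, ci_ai = mean_ci(time_ai)
mean_std, ci_std = mean_ci(time_standard)
pct_saved = 100 * (mean_std - mean_ai) / mean_std

# Paired test, because each clinician contributes one time under each condition.
t_stat, p_value = stats.ttest_rel(time_ai, time_standard)

print(f"AI-optimized: {mean_ai:.1f} min (95% CI {ci_ai[0]:.1f}-{ci_ai[1]:.1f})")
print(f"Standard:     {mean_std:.1f} min (95% CI {ci_std[0]:.1f}-{ci_std[1]:.1f})")
print(f"Time saved:   {pct_saved:.0f}%   paired t-test P = {p_value:.3f}")
```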
Database: OpenAIRE
External link: