Automatic speech recognition performance for digital scribes: a performance comparison between general-purpose and specialized models tuned for patient-clinician conversations.
Author: Tran BD, University of California Irvine, Irvine, CA, USA; Mangu R, University of California Irvine, Irvine, CA, USA; Tai-Seale M, University of California San Diego, La Jolla, USA; Lafata JE, University of North Carolina at Chapel Hill, Chapel Hill, USA, and Henry Ford Health System, Detroit, USA; Zheng K, University of California Irvine, Irvine, CA, USA
Language: English
Source: AMIA ... Annual Symposium proceedings. AMIA Symposium [AMIA Annu Symp Proc] 2023 Apr 29; Vol. 2022, pp. 1072-1080. Date of Electronic Publication: 2023 Apr 29 (Print Publication: 2022).
Abstract: One promising solution to physicians' data entry needs is the development of so-called "digital scribes," tools that aim to automate clinical documentation via automatic speech recognition (ASR) of patient-clinician conversations. Evaluating specialized ASR models in this domain, which is useful for understanding feasibility and development opportunities, has been difficult because most such models were still under development. Following the commercial release of such models, we report an independent evaluation of four models (two general-purpose and two specialized for medical conversation) using a corpus of 36 primary care conversations. We identify word error rates (WER) of 8.8%-10.5% and word-level diarization error rates (WDER) ranging from 1.8% to 13.9%, which are generally lower than previous reports. The findings indicate that, while there is room for improvement, the performance of these specialized models, at least under ideal recording conditions, may be amenable to the development of downstream applications that rely on ASR of patient-clinician conversations. (©2022 AMIA - All rights reserved.)
Database: MEDLINE
External link:
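The abstract reports performance in terms of word error rate (WER). For readers unfamiliar with the metric, the minimal sketch below computes WER as the word-level Levenshtein edit distance between a reference and a hypothesis transcript, normalized by the reference length. The transcripts are hypothetical examples, and the code illustrates the standard metric only; it is not the evaluation pipeline used in the study, and the study's word-level diarization error rate (WDER) additionally accounts for speaker attribution, which is not shown here.

```python
# Illustrative sketch of the standard word error rate (WER) metric; NOT the
# authors' evaluation code. WER = (substitutions + deletions + insertions)
# divided by the number of reference words, computed here with a word-level
# Levenshtein edit distance.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table: d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    # Hypothetical transcript pair for illustration only.
    ref = "the patient reports chest pain for two days"
    hyp = "the patient report chest pain for days"
    # One substitution plus one deletion over 8 reference words -> 25.0%.
    print(f"WER: {word_error_rate(ref, hyp):.1%}")
```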