Quantifying and Improving the Performance of Speech Recognition Systems on Dysphonic Speech.

Author: Hidalgo Lopez JC; Emory University School of Medicine, Atlanta, Georgia, USA., Sandeep S; Emory University School of Medicine, Atlanta, Georgia, USA., Wright M; Georgia State University, Atlanta, Georgia, USA., Wandell GM; Department of Otolaryngology-Head & Neck Surgery, University of Washington School of Medicine, Seattle, Washington, USA., Law AB; Emory University School of Medicine, Atlanta, Georgia, USA.
Language: English
Source: Otolaryngology-Head and Neck Surgery: Official Journal of American Academy of Otolaryngology-Head and Neck Surgery [Otolaryngol Head Neck Surg] 2023 May; Vol. 168 (5), pp. 1130-1138. Date of Electronic Publication: 2023 Jan 24.
DOI: 10.1002/ohn.170
Abstract: Objective: This study seeks to quantify how current speech recognition systems perform on dysphonic input and whether they can be improved.
Study Design: Experimental machine learning methods based on a retrospective database.
Setting: Single academic voice center.
Methods: A database of dysphonic speech recordings was created and tested against 3 speech recognition platforms. Platform performance on dysphonic voice input was compared to platform performance on normal voice input. A custom speech recognition model was trained on voice recordings from patients with spasmodic dysphonia or vocal fold paralysis, and its performance was compared to that of the base model.
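The abstract does not include the evaluation code. As an illustrative sketch of the comparison step, per-recording accuracy could be computed as 1 minus the word error rate (WER) between a reference transcript and a platform's output; the jiwer library and the sentence pairs below are assumptions for illustration, not taken from the paper:

```python
# Sketch: word accuracy = 1 - WER, averaged across recordings.
# jiwer is an open-source WER library; the sentence pairs are hypothetical.
import jiwer

def word_accuracy(reference: str, hypothesis: str) -> float:
    """Word accuracy, defined here as 1 - word error rate (WER)."""
    return 1.0 - jiwer.wer(reference, hypothesis)

# Hypothetical (reference transcript, platform transcript) pairs.
pairs = [
    ("the patient reports persistent hoarseness",
     "the patient reports persistent harshness"),
    ("voice therapy was recommended",
     "voice therapy was recommended"),
]

scores = [word_accuracy(ref, hyp) for ref, hyp in pairs]
print(f"Mean word accuracy: {sum(scores) / len(scores):.2%}")
```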
Results: All platforms performed well on normal voice, but 2 platforms performed significantly worse on dysphonic speech. Accuracy on dysphonic speech was 84.55%, 88.57%, and 93.56% for International Business Machines (IBM) Watson, Amazon Transcribe, and Microsoft Azure, respectively. A secondary analysis demonstrated that the lower performance of IBM Watson and Amazon Transcribe was driven by recordings of spasmodic dysphonia and vocal fold paralysis. A custom model was therefore built on the Microsoft platform to increase transcription accuracy for these pathologies. Overall, the custom model achieved 96.43% accuracy on dysphonic voices and 97.62% on normal voices.
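The abstract does not describe how the custom model was deployed or queried. For context only, a Custom Speech model on Microsoft Azure is typically reached through the Speech SDK by pointing a recognizer at a custom endpoint; the key, region, endpoint ID, and filename below are placeholders, and this sketch is not the authors' implementation:

```python
# Sketch: transcribing one recording against an Azure Custom Speech endpoint.
# Requires the azure-cognitiveservices-speech package; all identifiers are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
speech_config.endpoint_id = "YOUR_CUSTOM_SPEECH_ENDPOINT_ID"  # selects the custom model

audio_config = speechsdk.audio.AudioConfig(filename="dysphonic_sample.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                        audio_config=audio_config)

result = recognizer.recognize_once()  # single-utterance recognition
print(result.text)
```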
Conclusion: Current speech recognition systems generally perform worse on dysphonic speech than on normal speech. We theorize that this poor performance stems from a lack of dysphonic voices in each platform's original training dataset. We address this limitation by using transfer learning to improve the performance of these systems on all dysphonic speech.
(© 2023 American Academy of Otolaryngology-Head and Neck Surgery Foundation.)
Database: MEDLINE