Author: |
Hidalgo Lopez, Julio C., Sandeep, Shelly, Wright, MaKayla, Wandell, Grace M., Law, Anthony B. |
Source: |
Otolaryngology-Head & Neck Surgery; May2023, Vol. 168 Issue 5, p1130-1138, 9p |
Abstract: |
Objective: This study seeks to quantify how current speech recognition systems perform on dysphonic input and if they can be improved. Study Design: Experimental machine learning methods based on a retrospective database. Setting: Single academic voice center. Methods: A database of dysphonic speech recordings was created and tested against 3 speech recognition platforms. Platform performance on dysphonic voice input was compared to platform performance on normal voice input. A custom speech recognition model was trained on voice from patients with spasmodic dysphonia or vocal cord paralysis. Custom model performance was compared to base model performance. Results: All platforms performed well on normal voice, and 2 platforms performed significantly worse on dysphonic speech. Accuracy metrics on dysphonic speech returned values of 84.55%, 88.57%, and 93.56% for International Business Machines (IBM) Watson, Amazon Transcribe, and Microsoft Azure, respectively. The secondary analysis demonstrated that the lower performance of IBM Watson and Amazon Transcribe was driven by performance on spasmodic dysphonia and vocal fold paralysis. Thus, a custom model was built to increase the accuracy of these pathologies on the Microsoft platform. Overall, the performance of the custom model on dysphonic voices was 96.43% and on normal voices was 97.62%. Conclusion: Current speech recognition systems generally perform worse on dysphonic speech than on normal speech. We theorize that poor performance is a consequence of a lack of dysphonic voices in each platform's original training dataset. We address this limitation with transfer learning used to increase the performance of these systems on all dysphonic speech. [ABSTRACT FROM AUTHOR] |
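The abstract reports transcription "accuracy metrics" (e.g., 84.55% for IBM Watson) without specifying the formula. A common convention in speech recognition is word accuracy = 100 × (1 − WER), where WER is the word error rate: substitutions, deletions, and insertions divided by the number of reference words. The sketch below, a minimal illustration under that assumption (the example sentences are hypothetical, not from the study's database), computes WER with word-level Levenshtein distance:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference words,
    computed via word-level edit distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical example: a dysphonic recording misrecognized on 2 of 6 words
ref = "the patient reports hoarseness after surgery"
hyp = "the patient report horse after surgery"
accuracy = (1 - word_error_rate(ref, hyp)) * 100  # ~66.67% word accuracy
```

Under this definition, a platform's score over a test set would be the aggregate accuracy across all recordings, which is how per-pathology comparisons (e.g., spasmodic dysphonia vs. normal voice) could be made.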
Database: |
Complementary Index |
External link: |
|