Author:
Li-Chia Chang, Jeih-Weih Hung
Language:
English
Year of publication:
2022
Subject:
Source:
Applied System Innovation, Vol 5, Iss 4, p 71 (2022)
Document type:
article
ISSN:
2571-5577
DOI:
10.3390/asi5040071
Description:
This study proposes a novel robust speech feature extraction technique to improve speech recognition performance in noisy environments. The method exploits the information provided by the original acoustic model of the automatic speech recognition (ASR) system to learn a deep neural network that converts the original speech features. This network is trained to maximize the posterior accuracy of the acoustic-model state sequences with respect to the speech feature sequences. Compared with robustness methods that retrain or adapt the acoustic models, the new method has the advantages of a lighter computational load and faster training. In experiments conducted on the medium-vocabulary TIMIT database and task, the presented method yields lower word error rates than the unprocessed baseline and speech-enhancement-based techniques. These results indicate that the presented method is promising and worth developing further.
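The abstract describes training a feature-mapping network against a fixed acoustic model rather than retraining the acoustic model itself. The sketch below illustrates that general idea only; it is not the authors' implementation. It assumes a PyTorch setup, a frozen feed-forward acoustic model, frame-level state targets from a forced alignment, and hypothetical dimensions (FEAT_DIM, NUM_STATES); frame-level cross-entropy stands in for the posterior-maximization objective.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 40-dim input features, 1000 HMM-state (senone) classes.
FEAT_DIM, NUM_STATES = 40, 1000

# Stand-in for the ASR system's original acoustic model; its parameters are frozen,
# so only the feature mapper is learned (assumed architecture, not the paper's).
acoustic_model = nn.Sequential(
    nn.Linear(FEAT_DIM, 512), nn.ReLU(), nn.Linear(512, NUM_STATES)
)
for p in acoustic_model.parameters():
    p.requires_grad = False

# Feature-mapping network: converts noisy features so the frozen acoustic model
# assigns higher posterior probability to the reference state sequence.
feature_mapper = nn.Sequential(
    nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, FEAT_DIM)
)

optimizer = torch.optim.Adam(feature_mapper.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # frame-level cross-entropy against aligned states

def train_step(noisy_feats, state_targets):
    """noisy_feats: (num_frames, FEAT_DIM); state_targets: (num_frames,) aligned states."""
    optimizer.zero_grad()
    converted = feature_mapper(noisy_feats)       # converted (enhanced) features
    logits = acoustic_model(converted)            # frozen model scores the states
    loss = criterion(logits, state_targets)       # raise posterior of reference states
    loss.backward()                               # gradients reach only feature_mapper
    optimizer.step()
    return loss.item()

# Toy usage with random placeholders for real aligned training frames.
feats = torch.randn(32, FEAT_DIM)
targets = torch.randint(0, NUM_STATES, (32,))
print(train_step(feats, targets))
```

Because the acoustic model stays fixed, only the comparatively small mapping network is optimized, which is consistent with the abstract's claim of a lighter computational load than retraining or adapting the acoustic model.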
Database:
Directory of Open Access Journals
External link: