Improved Low-Resource Somali Speech Recognition by Semi-Supervised Acoustic and Language Model Training
Author: | Raghav Menon, Astik Biswas, Ewald van der Westhuizen, Thomas Niesler |
Year of publication: | 2019 |
Subject: | FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Machine Learning (cs.LG); Computer Science - Computation and Language (cs.CL); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); Speech recognition; Somali; Acoustic model; Language model; Perplexity; Keyword spotting; Artificial neural network |
Source: | INTERSPEECH |
DOI: | 10.21437/interspeech.2019-1328 |
Description: | We present improvements in automatic speech recognition (ASR) for Somali, a currently extremely under-resourced language. This forms part of a continuing United Nations (UN) effort to employ ASR-based keyword spotting systems to support humanitarian relief programmes in rural Africa. Using just 1.57 hours of annotated speech data as a seed corpus, we increase the pool of training data by applying semi-supervised training to 17.55 hours of untranscribed speech. We make use of factorised time-delay neural networks (TDNN-F) for acoustic modelling, since these have recently been shown to be effective in resource-scarce situations. Three semi-supervised training passes were performed, where the decoded output from each pass was used for acoustic model training in the subsequent pass (a schematic sketch of this loop follows the record below). The automatic transcriptions from the best-performing pass were used for language model augmentation. To ensure the quality of the automatic transcriptions, a decoder confidence threshold was applied. The acoustic and language models obtained from the semi-supervised approach show significant improvements in WER and perplexity over the baseline. Incorporating the automatically generated transcriptions yields a 6.55% improvement in language model perplexity, and the use of 17.55 hours of Somali acoustic data in semi-supervised training yields a 7.74% relative improvement over the baseline. Comment: 5 pages, 6 tables, 3 figures, 22 references (accepted at Interspeech 2019) |
Database: | OpenAIRE |
External link: |
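
The description above outlines an iterative self-training loop: decode the untranscribed pool, keep only utterances whose decoder confidence clears a threshold, retrain the acoustic model on the enlarged pool, and repeat. The following Python sketch is a minimal illustration of that loop under stated assumptions: the decode and training helpers are hypothetical stubs (the paper uses Kaldi-style TDNN-F acoustic models, whose scripts are not shown here), and the confidence threshold value is illustrative, not taken from the paper.

```python
# Minimal, hypothetical sketch of the semi-supervised training loop described
# in the record above. All helpers are placeholder stubs, not the authors' code.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Utterance:
    audio_id: str      # identifier of an untranscribed recording
    hypothesis: str    # automatic transcription produced by the decoder
    confidence: float  # per-utterance decoder confidence in [0, 1]


def train_acoustic_model(training_pool: List[Tuple[str, str]]):
    """Stub: train a TDNN-F acoustic model on (audio_id, transcription) pairs."""
    return object()  # stands in for a trained model


def decode(model, untranscribed_audio: List[str]) -> List[Utterance]:
    """Stub: decode untranscribed audio, returning hypotheses with confidences."""
    return []


def semi_supervised_training(seed_corpus: List[Tuple[str, str]],
                             untranscribed_audio: List[str],
                             confidence_threshold: float = 0.9,  # assumed value
                             num_passes: int = 3):
    """Iterative self-training: each pass retrains the acoustic model on the
    seed data plus confidence-filtered automatic transcriptions from the
    previous pass."""
    model = train_acoustic_model(seed_corpus)     # seed: 1.57 h transcribed speech
    selected: List[Utterance] = []

    for _ in range(num_passes):                   # the paper performs three passes
        hypotheses = decode(model, untranscribed_audio)  # 17.55 h untranscribed
        selected = [u for u in hypotheses
                    if u.confidence >= confidence_threshold]
        training_pool = seed_corpus + [(u.audio_id, u.hypothesis)
                                       for u in selected]
        model = train_acoustic_model(training_pool)

    # Transcriptions from the best-performing pass (here simply the last pass,
    # as a simplification) can then be added to the language-model training text.
    lm_augmentation_text = [u.hypothesis for u in selected]
    return model, lm_augmentation_text
```

The loop structure and the pass count follow the description; everything else (function names, data layout, threshold) is an assumption made only to keep the sketch self-contained.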