Multi-Dialect Speech Recognition With A Single Sequence-To-Sequence Model
Authors: | Eugene Weinstein, Khe Chai Sim, Michiel Bacchiani, Patrick Nguyen, Kanishka Rao, Yonghui Wu, Zhifeng Chen, Bo Li, Tara N. Sainath |
---|---|
Year of publication: | 2017 |
Subject: |
FOS: Computer and information sciences
Sequence; Sound (cs.SD); Artificial neural network; Computer science; Speech recognition; Grapheme; 020206 networking & telecommunications; 02 engineering and technology; Pronunciation; Symbol (chemistry); Computer Science - Sound; 030507 speech-language pathology & audiology; 03 medical and health sciences; Audio and Speech Processing (eess.AS); 0202 electrical engineering, electronic engineering, information engineering; FOS: Electrical engineering, electronic engineering, information engineering; 0305 other medical science; Representation (mathematics); Electrical Engineering and Systems Science - Audio and Speech Processing |
Source: | ICASSP |
DOI: | 10.48550/arxiv.1712.01541 |
Description: | Sequence-to-sequence models provide a simple and elegant solution for building speech recognition systems by folding the separate components of a typical system, namely the acoustic (AM), pronunciation (PM) and language (LM) models, into a single neural network. In this work, we look at one such sequence-to-sequence model, namely listen, attend and spell (LAS), and explore the possibility of training a single model to serve different English dialects, which simplifies the process of training multi-dialect systems by removing the need for separate AMs, PMs and LMs for each dialect. We show that simply pooling the data from all dialects into one LAS model falls behind the performance of a model fine-tuned on each dialect. We then look at incorporating dialect-specific information into the model, both by modifying the training targets (inserting the dialect symbol at the end of the original grapheme sequence) and by feeding a 1-hot representation of the dialect information into all layers of the model. Experimental results on seven English dialects show that our proposed system is effective in modeling dialect variations within a single LAS model, outperforming a LAS model trained individually on each of the seven dialects by 3.1 to 16.5% relative. Comment: submitted to ICASSP 2018 |
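The two conditioning mechanisms described above can be sketched in a few lines: appending a special dialect symbol to the grapheme target sequence, and concatenating a 1-hot dialect vector to the input of a layer. This is a minimal illustration, not the paper's implementation; the dialect codes and function names here are hypothetical placeholders.

```python
import numpy as np

# Hypothetical codes for seven English dialects; the paper's actual
# dialect inventory and symbol format may differ.
DIALECTS = ["us", "gb", "au", "in", "za", "ke", "ng"]

def append_dialect_token(graphemes, dialect):
    # Modified training target: dialect symbol appended to the end
    # of the original grapheme sequence.
    return graphemes + [f"<{dialect}>"]

def one_hot(dialect):
    # 1-hot representation of the dialect identity.
    vec = np.zeros(len(DIALECTS), dtype=np.float32)
    vec[DIALECTS.index(dialect)] = 1.0
    return vec

def condition_layer_input(features, dialect):
    # One simple way to feed dialect information into a layer:
    # tile the 1-hot vector over time and concatenate it to the
    # layer's input features along the feature axis.
    d = np.tile(one_hot(dialect), (features.shape[0], 1))
    return np.concatenate([features, d], axis=-1)

# Example usage with toy data.
targets = append_dialect_token(list("hello"), "gb")
x = condition_layer_input(np.zeros((4, 8), dtype=np.float32), "gb")
```

In a real LAS model the concatenation would be applied to the inputs of every encoder and decoder layer; the sketch only shows the tensor manipulation itself.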
Database: | OpenAIRE |
External link: |