Multilingual representations for low resource speech recognition and keyword search

Author: Haipeng Wang, Xiaodong Cui, Ralf Schlüter, Mark J. F. Gales, Bhuvana Ramabhadran, Lidia Mangu, Ellen Kislal, Kate Knill, Anton Ragni, Michael Picheny, P.C. Woodland, Markus Nussbaum-Thom, Hermann Ney, Pavel Golik, Abhinav Sethy, Jia Cui, Zoltán Tüske, Brian Kingsbury, Kartik Audhkhasi
Year of publication: 2018
Subject:
Source: ASRU
DOI: 10.17863/cam.26564
Description: © 2015 IEEE. This paper examines the impact of multilingual (ML) acoustic representations on Automatic Speech Recognition (ASR) and keyword search (KWS) for low resource languages in the context of the OpenKWS15 evaluation of the IARPA Babel program. The task is to develop Swahili ASR and KWS systems within two weeks using as little as 3 hours of transcribed data. Multilingual acoustic representations proved to be crucial for building these systems under strict time constraints. The paper discusses several key insights on how these representations are derived and used. First, we present a data sampling strategy that can speed up the training of multilingual representations without appreciable loss in ASR performance. Second, we show that fusion of diverse multilingual representations developed at different LORELEI sites yields substantial ASR and KWS gains. Speaker adaptation and data augmentation of these representations improve both ASR and KWS performance (up to 8.7% relative). Third, incorporating un-transcribed data through semi-supervised learning improves WER and KWS performance. Finally, we show that these multilingual representations significantly improve ASR and KWS performance (9% relative for WER and 5% relative for MTWV) even when forty hours of transcribed audio in the target language is available. Multilingual representations significantly contributed to the LORELEI KWS systems winning the OpenKWS15 evaluation.
Database: OpenAIRE