Leveraging Weakly Supervised Data to Improve End-to-End Speech-to-Text Translation

Author: Chung-Cheng Chiu, Yuan Cao, Ron Weiss, Wolfgang Macherey, Yonghui Wu, Stella Marie Laurenzo, Ye Jia, Naveen Ari, Melvin Johnson
Year of publication: 2018
Subject:
FOS: Computer and information sciences
FOS: Electrical engineering, electronic engineering, information engineering
Computer Science - Machine Learning (cs.LG)
Computer Science - Computation and Language (cs.CL)
Computer Science - Sound (cs.SD)
Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS)
Machine translation
Speech recognition
Speech translation
End-to-end principle
Inference
Overfitting
Training set
Quality (business)
Computer science
Source: ICASSP
DOI: 10.48550/arxiv.1811.02050
Description: End-to-end Speech Translation (ST) models have many potential advantages when compared to the cascade of Automatic Speech Recognition (ASR) and text Machine Translation (MT) models, including lower inference latency and the avoidance of error compounding. However, the quality of end-to-end ST is often limited by a paucity of training data, since it is difficult to collect large parallel corpora of speech and translated transcript pairs. Previous studies have proposed the use of pre-trained components and multi-task learning in order to benefit from weakly supervised training data, such as speech-to-transcript or text-to-foreign-text pairs. In this paper, we demonstrate that using pre-trained MT or text-to-speech (TTS) synthesis models to convert weakly supervised data into speech-to-translation pairs for ST training can be more effective than multi-task learning. Furthermore, we demonstrate that a high quality end-to-end ST model can be trained using only weakly supervised datasets, and that synthetic data sourced from unlabeled monolingual text or speech can be used to improve performance. Finally, we discuss methods for avoiding overfitting to synthetic speech with a quantitative ablation study.
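The data-augmentation idea summarized in the description can be sketched as follows. This is a minimal illustration only, not the paper's implementation; mt_model.translate and tts_model.synthesize are hypothetical stand-ins for whatever pre-trained MT and TTS systems are available.

# Sketch: convert weakly supervised pairs into synthetic speech-to-translation pairs.

def augment_asr_pairs(asr_pairs, mt_model):
    """ASR data (speech, transcript) -> synthetic ST data (speech, translation)."""
    return [(speech, mt_model.translate(transcript))
            for speech, transcript in asr_pairs]

def augment_mt_pairs(mt_pairs, tts_model):
    """MT data (source text, translation) -> synthetic ST data (synthetic speech, translation)."""
    return [(tts_model.synthesize(source_text), translation)
            for source_text, translation in mt_pairs]

# The resulting synthetic pairs can be mixed with any real ST data for end-to-end
# training; the paper additionally studies how to avoid overfitting to the
# synthetic (TTS) speech produced by the second conversion.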
Comment: ICASSP 2019
Database: OpenAIRE