GANSpeech: Adversarial Training for High-Fidelity Multi-Speaker Speech Synthesis
Author: Jae-Sung Bae, Taejun Bak, Hoon-Young Cho, Jinhyeok Yang, Young-Ik Kim
Language: English
Year of publication: 2021
Subject: FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Sound (cs.SD); Machine Learning (cs.LG); Computation and Language (cs.CL); Audio and Speech Processing (eess.AS); speech synthesis; multi-speaker text-to-speech; adversarial training; high fidelity; mean opinion score
Description: Recent advances in neural multi-speaker text-to-speech (TTS) models have enabled a single model to generate speech of reasonably good quality and to synthesize the voice of a speaker with limited training data. Fine-tuning the multi-speaker model on target-speaker data can improve quality further; however, a gap to real speech samples remains, and the fine-tuned model is tied to a single speaker. In this work, we propose GANSpeech, a high-fidelity multi-speaker TTS model that applies adversarial training to a non-autoregressive multi-speaker TTS model. In addition, we propose a simple but effective automatic scaling method for the feature-matching loss used in adversarial training. In subjective listening tests, GANSpeech significantly outperformed the baseline multi-speaker FastSpeech and FastSpeech2 models and achieved a better MOS than speaker-specific fine-tuned FastSpeech2. Accepted to INTERSPEECH 2021.
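The abstract describes the approach only at a high level, so the following is a minimal PyTorch sketch of an adversarial generator objective with an automatically scaled feature-matching loss. Everything concrete here is an assumption for illustration: the LSGAN-style objective, the discriminator interface (a realness score plus intermediate feature maps), and the scaling rule (setting the weight so the feature-matching term matches the adversarial term's magnitude) reflect one plausible reading, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def feature_matching_loss(real_feats, fake_feats):
    """Mean L1 distance between discriminator feature maps of real and generated mels."""
    loss = 0.0
    for r, f in zip(real_feats, fake_feats):
        loss = loss + F.l1_loss(f, r.detach())  # detach real path: gradient flows to generator only
    return loss / len(real_feats)


def generator_loss(disc, mel_real, mel_fake):
    """Adversarial loss plus an automatically scaled feature-matching loss.

    Hypothetical scaling rule: weight the feature-matching term so it has the
    same magnitude as the adversarial term at each step, with the scale
    detached so it carries no gradient of its own.
    """
    score_fake, fake_feats = disc(mel_fake)
    _, real_feats = disc(mel_real)

    adv = torch.mean((score_fake - 1.0) ** 2)  # LSGAN-style generator objective (assumed)
    fm = feature_matching_loss(real_feats, fake_feats)
    lam = (adv / (fm + 1e-8)).detach()         # automatic scale, no gradient through lam
    return adv + lam * fm


if __name__ == "__main__":
    import torch.nn as nn

    class DummyDisc(nn.Module):
        """Stand-in discriminator: returns a realness score and intermediate feature maps."""
        def __init__(self):
            super().__init__()
            self.layers = nn.ModuleList([
                nn.Conv1d(80, 64, 3, padding=1),
                nn.Conv1d(64, 1, 3, padding=1),
            ])

        def forward(self, mel):
            feats, x = [], mel
            for layer in self.layers:
                x = layer(x)
                feats.append(x)
            return x.mean(), feats

    disc = DummyDisc()
    mel_real = torch.randn(2, 80, 100)                       # (batch, mel bins, frames)
    mel_fake = torch.randn(2, 80, 100, requires_grad=True)   # stand-in for TTS output
    print(generator_loss(disc, mel_real, mel_fake))
```

Detaching the scale keeps the optimization target unchanged while balancing the two loss magnitudes per step, which avoids hand-tuning a fixed feature-matching weight across speakers and datasets.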
Database: OpenAIRE
External link: