Sequence-To-Sequence Singing Voice Synthesis With Perceptual Entropy Loss

Authors: Jiatong Shi, Yuekai Zhang, Shuai Guo, Qin Jin, Nan Huo
Year of publication: 2021
Source: ICASSP
DOI: 10.1109/icassp39728.2021.9414348
Description: Neural network (NN) based singing voice synthesis (SVS) systems require sufficient data to train well and are prone to over-fitting when data are scarce. In practice, data limitation is a common problem in building SVS systems because of the high cost of data acquisition and annotation. In this work, we propose a Perceptual Entropy (PE) loss, derived from a psycho-acoustic hearing model, to regularize the network. Using a one-hour open-source singing voice database, we explore the impact of the PE loss on several mainstream sequence-to-sequence models, including RNN-based, transformer-based, and conformer-based models. Our experiments show that the PE loss mitigates the over-fitting problem and significantly improves synthesized singing quality, as reflected in both objective and subjective evaluations.
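The abstract describes the PE loss as an auxiliary regularization term added to the main training objective. The record does not give the paper's actual formulation, so the following is only a minimal illustrative sketch of combining a spectrogram reconstruction loss with a weighted auxiliary perceptual term; the function name `perceptual_weighted_loss`, the weight map `perceptual_weight`, and the coefficient `lam` are assumptions for illustration, not the paper's PE definition.

```python
import numpy as np

def perceptual_weighted_loss(pred, target, perceptual_weight, lam=0.1):
    """Combine a plain reconstruction loss with an auxiliary
    perceptually weighted term (illustrative sketch only; the paper's
    PE loss is derived from a psycho-acoustic hearing model)."""
    residual = (pred - target) ** 2
    recon = np.mean(residual)                        # main spectrogram L2 loss
    pe = np.mean(perceptual_weight * residual)       # hypothetical perceptual term
    return recon + lam * pe                          # lam balances regularization

# Toy usage with constant arrays:
pred = np.ones((2, 3))
target = np.zeros((2, 3))
weight = np.full((2, 3), 2.0)
total = perceptual_weighted_loss(pred, target, weight, lam=0.1)  # 1.0 + 0.1 * 2.0 = 1.2
```

In a real training loop the weighted term would be added to the model's loss before back-propagation, so gradients from both terms shape the network.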
Comment: Accepted by ICASSP2021
Database: OpenAIRE