Sequence-To-Sequence Singing Voice Synthesis With Perceptual Entropy Loss
| Author | Jiatong Shi, Yuekai Zhang, Shuai Guo, Qin Jin, Nan Huo |
| --- | --- |
| Year | 2021 |
| Subject | FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; Signal processing; Sequence; Artificial neural network; Speech recognition; Data acquisition; Perception; Quality; Entropy; Singing |
| Source | ICASSP |
| DOI | 10.1109/icassp39728.2021.9414348 |
| Description | Neural network (NN) based singing voice synthesis (SVS) systems require sufficient data to train well and are prone to over-fitting when data is scarce. However, data limitation is common in building SVS systems because of the high cost of data acquisition and annotation. In this work, we propose a Perceptual Entropy (PE) loss derived from a psycho-acoustic hearing model to regularize the network. With a one-hour open-source singing voice database, we explore the impact of the PE loss on several mainstream sequence-to-sequence models, including RNN-based, transformer-based, and conformer-based models. Our experiments show that the PE loss can mitigate the over-fitting problem and significantly improve the quality of the synthesized singing in both objective and subjective evaluations (a loss of this kind is sketched after this record). Comment: Accepted by ICASSP 2021. |
| Database | OpenAIRE |
| External link | |
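The record above describes the PE loss only at a high level. As a rough illustration, the sketch below shows one way a perceptual-entropy-style regularizer could be combined with a standard spectrogram reconstruction loss in PyTorch. The masking threshold input, the per-bin weighting, and the `lam` weight are all assumptions for illustration; this is not the authors' exact formulation from the paper.

```python
import torch
import torch.nn.functional as F

def pe_weighted_loss(pred_mag, target_mag, mask_threshold, eps=1e-8):
    """Hypothetical perceptual-entropy-style spectral loss.

    pred_mag, target_mag: (batch, frames, bins) magnitude spectrograms.
    mask_threshold: per-bin masking thresholds from a psychoacoustic
    model (assumed to be computed elsewhere from the target signal).
    """
    # Bits needed to represent each bin above its masking threshold,
    # loosely following Johnston (1988): bins well above threshold
    # receive larger weights.
    pe = torch.log2(1.0 + target_mag / (mask_threshold + eps))
    # Emphasize perceptually salient bins in the reconstruction error.
    return (pe * (pred_mag - target_mag).abs()).mean()

def total_loss(pred_mag, target_mag, mask_threshold, lam=0.1):
    # Base L1 spectrogram loss plus the PE-style regularizer; `lam`
    # is an illustrative weight, not a value from the paper.
    return F.l1_loss(pred_mag, target_mag) + lam * pe_weighted_loss(
        pred_mag, target_mag, mask_threshold)

# Example usage with random tensors standing in for real features:
pred = torch.rand(2, 100, 80)
target = torch.rand(2, 100, 80)
threshold = 0.1 * torch.ones_like(target)  # placeholder threshold
print(total_loss(pred, target, threshold))
```

The intuition, following Johnston's (1988) notion of perceptual entropy, is that spectral bins far above their masking threshold carry more perceptually relevant information, so penalizing errors there more heavily can act as a perceptually motivated regularizer when training data is limited.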