Exemplar-Based Emotive Speech Synthesis
Author: | Helen Meng, Hui Lu, Yuewen Cao, Shiyin Kang, Xixin Wu, Xunying Liu, Zhiyong Wu, Songxiang Liu |
Year of publication: | 2021 |
Subject: |
Speech Acoustics; Acoustics and Ultrasonics; Computer Science; Speech Synthesis; Speech Enhancement; Computational Mathematics; Recurrent Neural Network; Emotive Speech; Computer Science (miscellaneous); Feature (Machine Learning); Spectrogram; Artificial Intelligence; Electrical and Electronic Engineering; Hidden Markov Model; Natural Language Processing |
Source: | IEEE/ACM Transactions on Audio, Speech, and Language Processing. 29:874-886 |
ISSN: | 2329-9304, 2329-9290 |
DOI: | 10.1109/taslp.2021.3052688 |
Description: | Expressive text-to-speech (E-TTS) synthesis is important for enhancing the user experience when communicating with machines through the speech modality. However, one of the challenges in E-TTS is the lack of a precise description of emotions. Categorical specifications may be insufficient for describing complex emotions, while dimensional specifications suffer from ambiguity in annotation. This work advocates a new approach: describing emotive speech acoustics using spoken exemplars. We investigate methods to extract emotion descriptions from an input exemplar of emotive speech. The measures are combined to form two descriptors, based on a capsule network (CapNet) and a residual error network (RENet). The former is designed to capture the spatial information in the input exemplary spectrogram, and the latter to capture the contrastive information between emotive acoustic expressions. Two different approaches convert the variable-length feature sequence into a fixed-size description vector: (1) dynamic routing groups similar capsules into the output description; and (2) a recurrent neural network's hidden states store the temporal information for the description. The two descriptors are integrated into a state-of-the-art sequence-to-sequence architecture to obtain an end-to-end system that is optimized as a whole towards the goal of generating correct emotive speech. Experimental results on a public audiobook dataset demonstrate that the two exemplar-based approaches achieve significant improvements over the baseline system in both emotion similarity and speech quality. |
Database: | OpenAIRE |
External link: |
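The abstract's approach (2) summarizes a variable-length exemplar spectrogram into a fixed-size description vector via a recurrent network's hidden state. The sketch below illustrates only that shape-level idea with a minimal tanh RNN in NumPy; the weights are random and the dimensions (80 mel bins, 16-dimensional descriptor) are hypothetical, not taken from the paper, which trains its recurrent descriptor end-to-end within the TTS system.

```python
import numpy as np

def rnn_descriptor(spectrogram, W_x, W_h, b):
    """Summarize a variable-length spectrogram (T x n_mels) into a
    fixed-size description vector: the final hidden state of a simple
    tanh RNN. A sketch of the hidden-state summarization idea only;
    the paper's RENet uses a trained network, not these random weights."""
    h = np.zeros(W_h.shape[0])
    for frame in spectrogram:          # one recurrent step per time frame
        h = np.tanh(W_x @ frame + W_h @ h + b)
    return h                           # shape: (hidden_dim,) regardless of T

# Hypothetical dimensions: 80 mel bins, 16-dim emotion descriptor.
rng = np.random.default_rng(0)
n_mels, hidden = 80, 16
W_x = rng.normal(0.0, 0.1, (hidden, n_mels))
W_h = rng.normal(0.0, 0.1, (hidden, hidden))
b = np.zeros(hidden)

# Two exemplars of different lengths map to descriptors of the same size,
# which is what lets them condition a fixed-topology seq2seq synthesizer.
short = rng.normal(size=(50, n_mels))
long_ = rng.normal(size=(300, n_mels))
d_short = rnn_descriptor(short, W_x, W_h, b)
d_long = rnn_descriptor(long_, W_x, W_h, b)
```

The same fixed-size-output property is what approach (1) achieves differently, by having dynamic routing pool a variable number of capsules into one output description.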