Author:
Susmitha Vekkot, Deepa Gupta, Mohammed Zakariah, Yousef Ajami Alotaibi
Language:
English
Year of publication:
2019
Source:
IEEE Access, Vol. 7, pp. 81883-81902 (2019)
Document type:
article
ISSN:
2169-3536
DOI:
10.1109/ACCESS.2019.2923003
Description:
Expressive speech can be synthesized through acoustic feature modeling by mapping spectral and fundamental frequency parameters between neutral speech and target emotions based on context. Speaker- and text-independent emotion conversion are challenging modeling problems in this paradigm. In this paper, spectral mapping using an i-vector-based framework of fixed dimensions is proposed for speaker-independent emotion conversion, treating the entire problem in the utterance domain rather than with the frame-level processing of existing approaches. The high dimensionality of i-vectors and the limited number of utterances available for i-vector training necessitate Probabilistic Linear Discriminant Analysis (PLDA) to derive the emotion-dependent latent vector. The i-vector setup does not require parallel data or alignment procedures at any stage of training. F0 mapping is trained on a multilayer feed-forward neural network using limited aligned seed parallel data. The framework is tested on three different languages (datasets), namely German (EmoDB), Telugu (IITKGP), and English (SAVEE). The proposed approach delivered superior performance to the baseline under both the clean and noisy data conditions considered for analysis. Under clean conditions, the proposed model outperformed the baseline with a Mel Cepstral Distortion as low as 3.8 (fear), an F0-RMSE of 26.31 (happiness), and a Perceptual Evaluation of Speech Quality (PESQ) score of 3.64 (anger) across datasets. Subjective testing yielded maximum CMOS scores of 4.10 (anger), 4.44 (fear), and 3.43 (happiness).
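For reference, a minimal sketch of the Mel Cepstral Distortion (MCD) metric cited in the abstract, assuming two time-aligned mel-cepstral sequences; the function name and array layout are illustrative and not taken from the paper:

    import numpy as np

    def mel_cepstral_distortion(mc_ref, mc_conv):
        # Frame-averaged MCD in dB between aligned mel-cepstral
        # sequences of shape (frames, coefficients); the 0th
        # (energy) coefficient is conventionally excluded.
        diff = mc_ref[:, 1:] - mc_conv[:, 1:]
        per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
        return float(np.mean(per_frame))

Lower values indicate a closer spectral match between converted and target speech, so a figure such as the 3.8 (fear) reported above would be the mean of this per-frame distortion over the evaluated utterances.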
Database:
Directory of Open Access Journals