Non-Autoregressive TTS with Explicit Duration Modelling for Low-Resource Highly Expressive Speech
Author: | Daniel Korzekwa, Goeric Huybrechts, Bartosz Putrycz, Viacheslav Klimkov, Kamil Pokora, Abdelhamid Ezzerg, Thomas Merritt, Raahil Shah |
Year of publication: | 2021 |
Subject: |
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Computer Science - Artificial Intelligence (cs.AI); Low resource; Computer science; Speech recognition; Naturalness; Autoregressive model; Duration (music); Similarity (psychology); Generative adversarial network |
Source: | 11th ISCA Speech Synthesis Workshop (SSW 11). |
DOI: | 10.21437/ssw.2021-17 |
Description: | Whilst recent neural text-to-speech (TTS) approaches produce high-quality speech, they typically require a large amount of recordings from the target speaker. In previous work, a 3-step method was proposed to generate high-quality TTS while greatly reducing the amount of data required for training. However, we have observed a ceiling effect in the level of naturalness achievable for highly expressive voices when using this approach. In this paper, we present a method for building highly expressive TTS voices with as little as 15 minutes of speech data from the target speaker. Compared to the current state-of-the-art approach, our proposed improvements close the gap to recordings by 23.3% for naturalness of speech and by 16.3% for speaker similarity. Further, we match the naturalness and speaker similarity of a Tacotron2-based full-data (~10 hours) model using only 15 minutes of target speaker data, whereas with 30 minutes or more, we significantly outperform it. The following improvements are proposed: 1) changing from an autoregressive, attention-based TTS model to a non-autoregressive model replacing attention with an external duration model and 2) an additional Conditional Generative Adversarial Network (cGAN) based fine-tuning step. Comment: 6 pages, 5 figures. Accepted to Speech Synthesis Workshop (SSW) 2021 |
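The core mechanism behind improvement 1) is that explicit durations let the model upsample phoneme encodings to frame rate deterministically, instead of learning an attention alignment. A minimal sketch of that upsampling step (function and variable names are illustrative, not taken from the paper; real systems operate on learned encoder outputs and predicted, not hand-set, durations):

```python
import numpy as np

def upsample_by_duration(phoneme_encodings: np.ndarray,
                         durations: np.ndarray) -> np.ndarray:
    """Expand each phoneme encoding to frame rate.

    Each phoneme vector is repeated for the number of mel-spectrogram
    frames its predicted duration assigns to it, replacing the
    attention-based alignment of autoregressive TTS.
    """
    return np.repeat(phoneme_encodings, durations, axis=0)

# Toy example: 3 phonemes with 4-dim encodings, lasting 2, 1 and 3 frames.
enc = np.arange(12, dtype=np.float32).reshape(3, 4)
frames = upsample_by_duration(enc, np.array([2, 1, 3]))
print(frames.shape)  # (6, 4): one row per output frame, 2 + 1 + 3 = 6
```

Because the frame-level sequence length is fixed by the duration model up front, all frames can then be decoded in parallel, which is what makes the model non-autoregressive.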
Database: | OpenAIRE |
External link: |