Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis
Authors: Eric Battenberg, David T. H. Kao, Tom Bagby, Soroosh Mariooryad, RJ Skerry-Ryan, Daisy Stanton, Matt Shannon
Year of publication: 2019
Subject: FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Sound (cs.SD); Computer Science - Computation and Language (cs.CL); Electrical Engineering and Systems Science - Audio and Speech Processing (eess.AS); FOS: Electrical engineering, electronic engineering, information engineering; speech synthesis; speech recognition
Source: ICASSP
DOI: 10.48550/arxiv.1910.10288
Description: Despite the ability to produce human-level speech for in-domain text, attention-based end-to-end text-to-speech (TTS) systems suffer from text alignment failures that increase in frequency for out-of-domain text. We show that these failures can be addressed using simple location-relative attention mechanisms that do away with content-based query/key comparisons. We compare two families of attention mechanisms: location-relative GMM-based mechanisms and additive energy-based mechanisms. We suggest simple modifications to GMM-based attention that allow it to align quickly and consistently during training, and introduce a new location-relative attention mechanism to the additive energy-based family, called Dynamic Convolution Attention (DCA). We compare the various mechanisms in terms of alignment speed and consistency during training, naturalness, and ability to generalize to long utterances, and conclude that GMM attention and DCA can generalize to very long utterances, while preserving naturalness for shorter, in-domain utterances. Comment: Accepted to ICASSP 2020
Database: OpenAIRE
External link:
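To make the "location-relative" idea from the abstract concrete, below is a minimal NumPy sketch of one decoder step of Graves-style GMM attention, the first of the two families the paper compares. The parameter transforms (softplus for the step size and width, softmax for the mixture weights) correspond to one of the variants the authors discuss, chosen here for illustration; all function and variable names are hypothetical, not taken from the paper's code.

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus: log(1 + exp(x))."""
    return np.logaddexp(0.0, x)

def gmm_attention_step(prev_means, delta_logits, sigma_logits, weight_logits, T):
    """One decoder step of location-relative GMM attention (illustrative sketch).

    In a real model, delta_logits, sigma_logits, and weight_logits would be
    predicted from the decoder state; here they are taken as given. The
    component means advance monotonically by a non-negative step, so the
    alignment can never move backward, and the attention weights over the T
    encoder positions are a mixture of Gaussians evaluated at each position.
    """
    deltas = softplus(delta_logits)             # non-negative forward step per component
    means = prev_means + deltas                 # location-relative, monotonic update
    sigmas = softplus(sigma_logits) + 1e-5      # positive component widths
    w = np.exp(weight_logits - np.max(weight_logits))
    w = w / w.sum()                             # mixture weights via softmax

    pos = np.arange(T)[None, :]                 # encoder positions, shape (1, T)
    # Gaussian mixture evaluated at every encoder position
    comp = np.exp(-0.5 * ((pos - means[:, None]) / sigmas[:, None]) ** 2)
    comp = comp / (sigmas[:, None] * np.sqrt(2.0 * np.pi))
    alignment = (w[:, None] * comp).sum(axis=0)  # attention weights, shape (T,)
    return alignment, means

# Toy usage: 5 mixture components attending over 100 encoder frames.
rng = np.random.default_rng(0)
means = np.zeros(5)
align, means = gmm_attention_step(means, rng.normal(size=5),
                                  rng.normal(size=5), rng.normal(size=5), T=100)
print(align.shape, round(align.sum(), 3))
```

Because the component means only ever move forward, the alignment is monotonic by construction and depends on position rather than on content-based query/key comparisons, which is the property the paper credits for generalization to utterances far longer than those seen during training.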