Comparison of Speech Representations for Automatic Quality Estimation in Multi-Speaker Text-to-Speech Synthesis
Authors: | Joanna Rownicka, Jennifer Williams, Pilar Oplustil, Simon King |
Language: | English |
Year of publication: | 2020 |
Subjects: | FOS: Computer and information sciences; FOS: Electrical engineering, electronic engineering, information engineering; Sound (cs.SD); Machine Learning (cs.LG); Computation and Language (cs.CL); Audio and Speech Processing (eess.AS); speech recognition; artificial neural network; mean opinion score; spectrogram; reverberation; correlation; noise |
Source: | Odyssey |
Description: | We aim to characterize how different speakers contribute to the perceived output quality of multi-speaker Text-to-Speech (TTS) synthesis. We automatically rate the quality of TTS using a neural network (NN) trained on human mean opinion score (MOS) ratings. First, we train and evaluate our NN model on 13 different TTS and voice conversion (VC) systems from the ASVspoof 2019 Logical Access (LA) dataset. Since it is not known how best to represent speech for this task, we compare 8 different representations alongside MOSNet frame-based features. Our representations include image-based spectrogram features and x-vector embeddings that explicitly model different types of noise, such as T60 reverberation time. Our NN predicts MOS with a high correlation to human judgments, and we report both prediction correlation and error. A key finding is that the quality achieved for certain speakers seems consistent regardless of the TTS or VC system. It is widely accepted that some speakers yield higher-quality TTS voices than others; our method provides an automatic way to identify such speakers. Finally, to test whether our quality prediction models generalize, we predict quality scores for synthetic speech from a separate multi-speaker TTS system trained on LibriTTS data, and conduct our own MOS listening test to compare human ratings with our NN predictions. Accepted at Speaker Odyssey 2020. |
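The evaluation described above reports prediction correlation and error between NN-predicted and human MOS ratings. A minimal NumPy sketch of that comparison, using made-up scores (the function name and all numbers are illustrative, not from the paper):

```python
import numpy as np

def evaluate_mos_predictions(predicted, human):
    """Compare predicted MOS against human MOS ratings: Pearson
    correlation plus mean squared error. Hypothetical helper, not
    the authors' code."""
    predicted = np.asarray(predicted, dtype=float)
    human = np.asarray(human, dtype=float)
    # Pearson correlation between predictions and human judgments
    corr = np.corrcoef(predicted, human)[0, 1]
    # Mean squared prediction error
    mse = np.mean((predicted - human) ** 2)
    return corr, mse

# Illustrative per-utterance MOS values on the usual 1-5 scale
pred = [3.8, 2.1, 4.2, 3.0, 1.9]
gold = [4.0, 2.0, 4.5, 3.2, 2.1]
corr, mse = evaluate_mos_predictions(pred, gold)
print(f"correlation={corr:.3f}  MSE={mse:.3f}")
```

A high correlation with low error, as the abstract reports, would indicate that the predicted scores track human judgments closely.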
Database: | OpenAIRE |
External link: |