Rethinking MUSHRA: Addressing Modern Challenges in Text-to-Speech Evaluation

Author: Varadhan, Praveen Srinivasa, Gulati, Amogh, Sankar, Ashwin, Anand, Srija, Gupta, Anirudh, Mukherjee, Anirudh, Marepally, Shiva Kumar, Bhatia, Ankur, Jaju, Saloni, Bhooshan, Suvrat, Khapra, Mitesh M.
Publication year: 2024
Subject:
Document type: Working Paper
Description: Despite rapid advancements in TTS models, a consistent and robust human evaluation framework is still lacking. For example, MOS tests fail to differentiate between similar models, and CMOS's pairwise comparisons are time-intensive. The MUSHRA test is a promising alternative for evaluating multiple TTS systems simultaneously, but in this work we show that its reliance on matching human reference speech unduly penalises the scores of modern TTS systems that can exceed human speech quality. More specifically, we conduct a comprehensive assessment of the MUSHRA test, focusing on its sensitivity to factors such as rater variability, listener fatigue, and reference bias. Based on our extensive evaluation involving 471 human listeners across Hindi and Tamil, we identify two primary shortcomings: (i) reference-matching bias, where raters are unduly influenced by the human reference, and (ii) judgement ambiguity, arising from a lack of clear fine-grained guidelines. To address these issues, we propose two refined variants of the MUSHRA test. The first variant enables fairer ratings for synthesized samples that surpass human reference quality. The second variant reduces ambiguity, as indicated by the relatively lower variance across raters. By combining these approaches, we achieve both more reliable and more fine-grained assessments. We also release MANGO, a massive dataset of 47,100 human ratings, the first-of-its-kind collection for Indian languages, aiding in analyzing human preferences and developing automatic metrics for evaluating TTS systems.
Comment: 19 pages, 12 Figures
Database: arXiv
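
As a minimal illustration of the kind of statistics the abstract refers to (per-system score aggregation and across-rater variance in a MUSHRA-style test), the Python sketch below aggregates hypothetical ratings on the standard 0-100 MUSHRA scale. The data layout, system names, and scores are invented for illustration; they are not taken from the MANGO release or the paper's protocol.

    # Illustrative sketch only: summarise hypothetical MUSHRA-style ratings (0-100)
    # per system with the mean score and the across-rater variance. Lower variance
    # is the kind of signal the abstract cites when comparing rater agreement.
    from statistics import mean, pvariance

    # ratings[system] = one 0-100 score per rater for the same utterance (made-up values)
    ratings = {
        "human_reference": [92, 88, 95, 90],
        "tts_system_A":    [93, 92, 97, 94],
        "tts_system_B":    [70, 71, 83, 55],
    }

    for system, scores in ratings.items():
        print(f"{system:>16}: mean={mean(scores):5.1f}  rater_variance={pvariance(scores):6.1f}")

Running the sketch prints one line per system; a variant of the test that reduces judgement ambiguity would be expected to show lower rater_variance values for the same systems.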