Taking MT Evaluation Metrics to Extremes: Beyond Correlation with Human Judgments

Author: Fomicheva, Marina; Specia, Lucia
Language: English
Year of publication: 2019
Subject:
Source: Computational Linguistics, Vol. 45, Iss. 3, pp. 515-558 (2019)
Document type: article
ISSN: 0891-2017; 1530-9312
DOI: 10.1162/coli_a_00356
Description: Automatic Machine Translation (MT) evaluation is an active field of research, with a handful of new metrics devised every year. Evaluation metrics are generally benchmarked against manual assessment of translation quality, with performance measured in terms of overall correlation with human scores. Much work has been dedicated to improving evaluation metrics to achieve a higher correlation with human judgments. However, little insight has been provided into the strengths and weaknesses of existing approaches and their behavior in different settings. In this work we conduct a broad meta-evaluation study of the performance of a wide range of evaluation metrics, focusing on three major aspects. First, we analyze the performance of the metrics when faced with different levels of translation quality, proposing a local dependency measure as an alternative to the standard, global correlation coefficient. We show that metric performance varies significantly across different levels of MT quality: Metrics perform poorly when faced with low-quality translations and are not able to capture nuanced quality distinctions. Interestingly, we show that evaluating low-quality translations is also more challenging for humans. Second, we show that metrics are more reliable when evaluating neural MT than traditional statistical MT systems. Finally, we show that the differences in evaluation accuracy across metrics persist even when the gold-standard scores are based on different criteria.
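The abstract contrasts the standard global correlation coefficient with a local dependency measure that exposes how metric reliability changes across quality levels. As a loose illustration only (not the authors' method), the following Python sketch computes a global Pearson correlation between metric scores and human judgments and then recomputes the correlation inside quality bins; all data here are synthetic placeholders.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# compare a global Pearson correlation against correlations computed
# separately within quality bins, illustrating how metric-human agreement
# can differ across translation-quality levels.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data: human quality scores and automatic metric scores
# for 1,000 MT outputs (placeholders for real annotations).
human = rng.uniform(0.0, 1.0, size=1000)
noise = rng.normal(0.0, 0.05 + 0.25 * (1.0 - human))  # noisier at low quality
metric = np.clip(human + noise, 0.0, 1.0)

# Global correlation: the standard meta-evaluation statistic.
r_global, _ = pearsonr(metric, human)
print(f"global Pearson r = {r_global:.3f}")

# Correlation within quality bins: a crude stand-in for a local measure.
bins = [(0.0, 0.33), (0.33, 0.66), (0.66, 1.0)]
for lo, hi in bins:
    mask = (human >= lo) & (human <= hi if hi == 1.0 else human < hi)
    r_local, _ = pearsonr(metric[mask], human[mask])
    print(f"quality in [{lo:.2f}, {hi:.2f}]: r = {r_local:.3f}, n = {mask.sum()}")
```

On synthetic data like this, the per-bin correlations for low-quality translations are typically lower than the global figure, which is the kind of effect the abstract reports for real metrics.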
Database: Directory of Open Access Journals