A Study in Improving BLEU Reference Coverage with Diverse Automatic Paraphrasing

Authors: Matt Post, Rachel Bawden, Biao Zhang, Lisa Yankovskaya, Andre Tättar
Contributors: School of Informatics [Edinburgh], University of Edinburgh, Institute of Computer Science [University of Tartu, Estonia], University of Tartu, Johns Hopkins University (JHU)
Language: English
Year of publication: 2020
Source: Bawden, R., Zhang, B., Yankovskaya, L., Tättar, A. & Post, M. 2020, 'A Study in Improving BLEU Reference Coverage with Diverse Automatic Paraphrasing', in Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 918-932, The 2020 Conference on Empirical Methods in Natural Language Processing, virtual conference, 16/11/20. <https://www.aclweb.org/anthology/2020.findings-emnlp.82>
Description: We investigate a long-perceived shortcoming in the typical use of BLEU: its reliance on a single reference. Using modern neural paraphrasing techniques, we study whether automatically generating additional diverse references can provide better coverage of the space of valid translations and thereby improve BLEU's correlation with human judgments. Our experiments on the into-English language directions of the WMT19 metrics task (at both the system and sentence level) show that using paraphrased references does generally improve BLEU, and when it does, the more diverse the better. However, we also show that better results could be achieved if those paraphrases were to specifically target the parts of the space most relevant to the MT outputs being evaluated. Moreover, the gains remain slight even when human paraphrases are used, suggesting inherent limitations to BLEU's capacity to correctly exploit multiple references. Surprisingly, we also find that adequacy appears to be less important, as shown by the high results of a strong sampling approach, which even beats human paraphrases when used with sentence-level BLEU.
Accepted to the Findings of EMNLP 2020
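
The core quantity the study varies is multi-reference BLEU: each automatically generated paraphrase is added as an extra reference stream against which the MT output's n-grams are matched. The following is a minimal sketch of that setup using the sacrebleu library; it is an illustration only, not the authors' evaluation code, and the example sentences and smoothing settings are assumptions.

```python
# Sketch: single- vs. multi-reference BLEU, where an extra (here hypothetical)
# paraphrased reference widens the space of valid translations the metric can match.
import sacrebleu

hypotheses = ["The cat sat on the mat."]              # MT system output(s)
refs_original = ["The cat was sitting on the mat."]   # original human reference
refs_paraphrase = ["A cat sat upon the mat."]         # hypothetical paraphrased reference

# Single reference: one reference stream.
single = sacrebleu.corpus_bleu(hypotheses, [refs_original])

# Multiple references: each paraphrase is passed as an additional reference stream.
multi = sacrebleu.corpus_bleu(hypotheses, [refs_original, refs_paraphrase])

print(f"single-reference BLEU: {single.score:.2f}")
print(f"multi-reference BLEU:  {multi.score:.2f}")

# Sentence-level BLEU with multiple references, the setting in which the paper's
# sampling-based paraphrases perform best.
sent = sacrebleu.sentence_bleu(hypotheses[0], [refs_original[0], refs_paraphrase[0]])
print(f"sentence-level BLEU:   {sent.score:.2f}")
```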
Database: OpenAIRE