Using Context in Neural Machine Translation Training Objectives
Author: | Bill Byrne, Felix Stahlberg, Danielle Saunders |
---|---|
Year of publication: | 2020 |
Subject: |
FOS: Computer and information sciences
Machine translation, Computer science, Machine learning, Artificial intelligence, Sampling (statistics), Context (language use), Sequence, Sentence, Metric (mathematics), BLEU, Computation and Language (cs.CL) |
Source: | ACL; Web of Science |
DOI: | 10.17863/cam.51975 |
Description: | We present Neural Machine Translation (NMT) training using document-level metrics with batch-level documents. Previous sequence-objective approaches to NMT training focus exclusively on sentence-level metrics like sentence BLEU, which do not correspond to the desired evaluation metric, typically document BLEU. Meanwhile, research into document-level NMT training focuses on data or model architecture rather than the training procedure. We find that each of these lines of research has a clear space in it for the other, and propose merging them with a scheme that allows a document-level evaluation metric to be used in the NMT training objective. We first sample pseudo-documents from sentence samples. We then approximate the expected document BLEU gradient with Monte Carlo sampling for use as a cost function in Minimum Risk Training (MRT). This two-level sampling procedure gives NMT performance gains over sequence MRT and maximum-likelihood training. We demonstrate that training is more robust for document-level metrics than with sequence metrics. We further demonstrate improvements on NMT with TER and on Grammatical Error Correction (GEC) with GLEU, both metrics used at the document level in evaluation. (ACL 2020) |
Database: | OpenAIRE |
External link: |
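The two-level sampling procedure described above (sample translations per sentence, combine them into pseudo-documents, then estimate the expected document-level cost for MRT) can be sketched as follows. This is only an illustration under assumptions, not the authors' implementation: the sample counts, the function names, and the toy exact-match document cost are all made up here; the paper uses document BLEU as the metric.

```python
import math
import random

def pseudo_documents(sentence_samples, num_docs, rng):
    """Draw pseudo-documents: one sampled translation per source sentence.

    sentence_samples: list over source sentences; each entry is a list of
    (hypothesis, log_probability) samples for that sentence.
    Returns a list of (document_hypotheses, document_log_probability).
    """
    docs = []
    for _ in range(num_docs):
        picks = [rng.choice(samples) for samples in sentence_samples]
        hyps = [h for h, _ in picks]
        # Sentences are sampled independently, so their log-probs add.
        log_p = sum(lp for _, lp in picks)
        docs.append((hyps, log_p))
    return docs

def mrt_risk(docs, cost_fn, references):
    """Monte Carlo estimate of the MRT objective: the expected
    document-level cost under the renormalised sample distribution."""
    log_ps = [lp for _, lp in docs]
    m = max(log_ps)  # shift by the max for numerical stability
    weights = [math.exp(lp - m) for lp in log_ps]
    z = sum(weights)
    return sum((w / z) * cost_fn(hyps, references)
               for (hyps, _), w in zip(docs, weights))

# Toy document-level cost: fraction of sentences that miss the
# reference exactly (a stand-in for 1 - document BLEU).
def doc_cost(hyps, refs):
    return sum(h != r for h, r in zip(hyps, refs)) / len(refs)

sentence_samples = [
    [("the cat", -0.1), ("a cat", -2.0)],
    [("sat down", -0.2), ("sits", -1.5)],
]
references = ["the cat", "sat down"]

docs = pseudo_documents(sentence_samples, num_docs=8, rng=random.Random(0))
risk = mrt_risk(docs, doc_cost, references)
```

In actual training, the gradient of this risk with respect to the model parameters would be taken through the sample log-probabilities; the sketch only shows how the cost side of that objective is assembled from sentence samples.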