Robust Neural Machine Translation with Doubly Adversarial Inputs
Authors: | Yong Cheng, Lu Jiang, Wolfgang Macherey |
Year of publication: | 2019 |
Subject: |
FOS: Computer and information sciences; Computation and Language (cs.CL); machine translation; adversarial examples; robustness; BLEU; Transformer |
Source: | ACL (1) |
DOI: | 10.48550/arxiv.1906.02443 |
Description: | Neural machine translation (NMT) is often vulnerable to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models, which consists of two parts: (1) attack the translation model with adversarial source examples; (2) defend the translation model with adversarial target inputs to improve its robustness against the adversarial source inputs. To generate adversarial inputs, we propose a gradient-based method that crafts adversarial examples informed by the translation loss over the clean inputs. Experimental results on Chinese-English and English-German translation tasks demonstrate that our approach achieves significant improvements ($2.8$ and $1.6$ BLEU points) over the Transformer on standard clean benchmarks, while also exhibiting higher robustness on noisy data. Comment: Accepted by ACL 2019 |
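The gradient-based generation described in the abstract can be sketched as a first-order word-substitution step: replace a source word with the in-vocabulary candidate whose embedding most increases the translation loss, approximated by the gradient of the loss with respect to the word's embedding. The sketch below is an illustrative reconstruction under that assumption, not the paper's released implementation; all function and variable names are hypothetical.

```python
import numpy as np

def adversarial_substitute(emb, grad, word_idx, candidates):
    """Pick the candidate word whose embedding most increases the
    translation loss, using a first-order (gradient) approximation.

    emb        -- vocabulary embedding matrix, shape (V, d)
    grad       -- gradient of the translation loss w.r.t. the current
                  word's embedding, shape (d,)
    word_idx   -- index of the word being replaced
    candidates -- array of indices of allowed replacement words

    Illustrative sketch only; names and interface are assumptions.
    """
    e_x = emb[word_idx]
    # First-order loss change for swapping in each candidate:
    # delta_L ≈ grad · (e_candidate − e_x). Pick the largest increase.
    scores = (emb[candidates] - e_x) @ grad
    return int(candidates[int(np.argmax(scores))])
```

In practice such a step would be applied to a small fraction of source positions, with candidates restricted (e.g. to nearest neighbors in embedding space) so the perturbed sentence stays fluent.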
Database: | OpenAIRE |
External link: |