Cross-lingual Intermediate Fine-tuning improves Dialogue State Tracking

Authors: Nikita Moghe, Mark Steedman, Alexandra Birch
Contributors: Moens, Marie-Francine; Huang, Xuanjing; Specia, Lucia; Yih, Scott Wen-tau
Publication year: 2021
Source: Moghe, N, Birch-Mayne, A & Steedman, M 2021, 'Cross-lingual Intermediate Fine-tuning improves Dialogue State Tracking', in M-F Moens, X Huang, L Specia & S Wen-tau Yih (eds), Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Stroudsburg, PA, United States, pp. 1137-1150, 2021 Conference on Empirical Methods in Natural Language Processing, Punta Cana, Dominican Republic, 7/11/21. <https://aclanthology.org/2021.emnlp-main.87/>
DOI: 10.48550/arxiv.2109.13620
Description: Recent progress in task-oriented neural dialogue systems is largely focused on a handful of languages, as annotation of training data is tedious and expensive. Machine translation has been used to make systems multilingual, but this can introduce a pipeline of errors. Another promising solution is using cross-lingual transfer learning through pretrained multilingual models. Existing methods train multilingual models with additional code-mixed task data or refine the cross-lingual representations through parallel ontologies. In this work, we enhance the transfer learning process by intermediate fine-tuning of pretrained multilingual models, where the multilingual models are fine-tuned with different but related data and/or tasks. Specifically, we use parallel and conversational movie subtitle datasets to design cross-lingual intermediate tasks suitable for downstream dialogue tasks. We use only 200K lines of parallel data for intermediate fine-tuning, and such data is already available for 1782 language pairs. We test our approach on the cross-lingual dialogue state tracking task for the parallel MultiWoZ (English → Chinese, Chinese → English) and Multilingual WoZ (English → German, English → Italian) datasets. We achieve substantial improvements (> 20% in joint goal accuracy) over the vanilla baseline on the parallel MultiWoZ dataset with only 10% of the target-language task data, and on the Multilingual WoZ dataset in a zero-shot setup.
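The intermediate step the abstract describes, further training a pretrained multilingual encoder on parallel subtitle lines before fine-tuning on dialogue state tracking, can be sketched as below. This is a minimal illustration, not the authors' released code: the translation-language-modelling-style masking over concatenated sentence pairs, the file name en-de.subtitles.tsv, and all hyperparameters are assumptions made for the example.

```python
# Sketch of cross-lingual intermediate fine-tuning: mBERT is further trained
# with masked language modelling over concatenated parallel subtitle lines
# (a TLM-style objective), assumed here as one plausible intermediate task.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")

class ParallelSubtitles(Dataset):
    """Tab-separated parallel lines, e.g. from OpenSubtitles: 'src\\ttgt'."""
    def __init__(self, path):
        with open(path, encoding="utf-8") as f:
            self.pairs = [line.rstrip("\n").split("\t") for line in f]
    def __len__(self):
        return len(self.pairs)
    def __getitem__(self, idx):
        src, tgt = self.pairs[idx]
        # Encode the two languages as a sentence pair so masked tokens can
        # be recovered from either side (translation language modelling).
        return tokenizer(src, tgt, truncation=True, max_length=128,
                         padding="max_length", return_tensors="pt")

def mask_tokens(input_ids, mlm_prob=0.15):
    """Standard BERT-style masking; returns masked inputs and MLM labels."""
    labels = input_ids.clone()
    special = torch.tensor(
        [tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
         for ids in labels.tolist()], dtype=torch.bool)
    probs = torch.full(labels.shape, mlm_prob)
    probs.masked_fill_(special, 0.0)            # never mask [CLS]/[SEP]/[PAD]
    masked = torch.bernoulli(probs).bool()
    labels[~masked] = -100                      # loss only on masked positions
    input_ids[masked] = tokenizer.mask_token_id # simplified: always use [MASK]
    return input_ids, labels

# File name and hyperparameters below are illustrative assumptions.
loader = DataLoader(ParallelSubtitles("en-de.subtitles.tsv"), batch_size=16)
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for batch in loader:
    input_ids = batch["input_ids"].squeeze(1)
    attn = batch["attention_mask"].squeeze(1)
    token_types = batch["token_type_ids"].squeeze(1)
    input_ids, labels = mask_tokens(input_ids)
    loss = model(input_ids=input_ids, attention_mask=attn,
                 token_type_ids=token_types, labels=labels).loss
    loss.backward()
    optim.step()
    optim.zero_grad()
```

After this intermediate stage, the encoder would be fine-tuned on the downstream dialogue state tracking data as usual; the paper's other intermediate tasks (e.g. over conversational subtitles) would follow the same recipe with a different data selection and masking scheme.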
EMNLP 2021 camera-ready
Database: OpenAIRE