Transfer Learning for Context-Aware Spoken Language Understanding
Author: Chen, Qian; Zhuo, Zhu; Wang, Wen; Xu, Qiuyun
Publication year: 2020
Source: 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Singapore, 2019, pp. 779-786
Document type: Working Paper
DOI: 10.1109/ASRU46091.2019.9003902
Description: Spoken language understanding (SLU) is a key component of task-oriented dialogue systems. SLU parses natural language user utterances into semantic frames. Previous work has shown that incorporating context information significantly improves SLU performance for multi-turn dialogues. However, collecting a large-scale human-labeled multi-turn dialogue corpus for the target domains is complex and costly. To reduce dependency on this collection and annotation effort, we propose a Context Encoding Language Transformer (CELT) model that facilitates exploiting various kinds of context information for SLU. We explore different transfer learning approaches to reduce dependency on data collection and annotation. In addition to unsupervised pre-training on large-scale general-purpose unlabeled corpora, such as Wikipedia, we explore unsupervised and supervised adaptive training approaches to benefit from other in-domain and out-of-domain dialogue corpora. Experimental results demonstrate that the proposed model with the proposed transfer learning approaches achieves significant improvements in SLU performance over state-of-the-art models on two large-scale single-turn dialogue benchmarks and one large-scale multi-turn dialogue benchmark. (A minimal sketch of the context-encoding idea follows this record.) Comment: 6 pages, 3 figures, ASRU 2019
Database: arXiv
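The record describes CELT only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the general idea: previous dialogue turns and the current utterance are concatenated into one token sequence, encoded with a Transformer, and used to jointly predict an utterance-level intent and token-level slot tags (the semantic frame). The class name `ContextSLU`, all hyperparameters, and the separator-based input packing are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class ContextSLU(nn.Module):
    """Hypothetical context-aware SLU sketch (not the authors' CELT code).

    Previous dialogue turns and the current utterance are assumed to be
    tokenized and concatenated into one sequence (with separator tokens),
    so self-attention can use the dialogue context directly.
    """

    def __init__(self, vocab_size, num_intents, num_slots,
                 d_model=256, nhead=4, num_layers=4, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        # Utterance-level intent and token-level slot predictions share
        # the same contextual encoder (joint SLU).
        self.intent_head = nn.Linear(d_model, num_intents)
        self.slot_head = nn.Linear(d_model, num_slots)

    def forward(self, token_ids, pad_mask):
        # token_ids: (batch, seq_len); pad_mask: True where padded.
        pos_ids = torch.arange(token_ids.size(1), device=token_ids.device)
        h = self.encoder(self.embed(token_ids) + self.pos(pos_ids),
                         src_key_padding_mask=pad_mask)
        intent_logits = self.intent_head(h[:, 0])  # assumes a leading [CLS]-style token
        slot_logits = self.slot_head(h)            # per-token slot (e.g., BIO) tags
        return intent_logits, slot_logits

# Toy usage with made-up sizes.
model = ContextSLU(vocab_size=30000, num_intents=20, num_slots=50)
tokens = torch.randint(1, 30000, (2, 48))
pad_mask = torch.zeros(2, 48, dtype=torch.bool)
intent_logits, slot_logits = model(tokens, pad_mask)
print(intent_logits.shape, slot_logits.shape)  # torch.Size([2, 20]) torch.Size([2, 48, 50])
```

In the paper's setting, the encoder weights would first come from unsupervised pre-training on general-purpose corpora such as Wikipedia, optionally followed by unsupervised or supervised adaptive training on other in-domain and out-of-domain dialogue corpora, before fine-tuning on the target SLU data; the sketch above shows only the model shape, not those training stages. Concatenating prior turns lets self-attention reach across turn boundaries, which is the context-awareness the abstract credits for the gains on the multi-turn benchmark.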