Joint Model-Based Attention for Spoken Language Understanding Task
Author: Liu Xin, Qi Ruihua, Lin Shao
Year of publication: 2020
Source: International Journal of Digital Crime and Forensics, 12:32-43
ISSN: 1941-6229, 1941-6210
DOI: 10.4018/ijdcf.2020100103
Description: Intent determination (ID) and slot filling (SF) are two critical steps in the spoken language understanding (SLU) task. Conventionally, most previous work has handled each subtask separately. To exploit the dependencies between the intent label and the slot sequence, and to handle both tasks simultaneously, this paper proposes a joint model (ABLCJ) trained with a unified loss function. To utilize both past and future input features efficiently, a Bi-LSTM with contextual information learns a representation of each time step, which is shared by the two tasks. The model also uses sentence-level tag information learned by a CRF layer to predict the tag of each slot, while a submodule-based attention mechanism captures global sentence features for intent classification. Experimental results demonstrate that ABLCJ achieves competitive performance on Shared Task 4 of NLPCC 2018.
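The description outlines the architecture at a high level: a shared Bi-LSTM encoder, a CRF layer scoring the slot-tag sequence, attention pooling for intent classification, and a single unified loss. Below is a minimal sketch of that general recipe, assuming a PyTorch implementation with the pytorch-crf package; the class name `JointSLU`, the additive attention form, and all dimensions are illustrative assumptions, not the paper's exact ABLCJ design.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class JointSLU(nn.Module):
    """Joint intent detection + slot filling: a hypothetical sketch of the
    Bi-LSTM + CRF + attention recipe described in the abstract."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, num_intents, num_slots):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bi-LSTM reads past and future context; its per-step hidden states
        # are shared by both subtasks.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Slot head: per-step emissions, scored jointly by a CRF layer so
        # sentence-level tag dependencies inform each slot prediction.
        self.slot_proj = nn.Linear(2 * hidden_dim, num_slots)
        self.crf = CRF(num_slots, batch_first=True)
        # Intent head: additive attention pools the sequence into a single
        # sentence vector for classification (an assumed attention form).
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.intent_proj = nn.Linear(2 * hidden_dim, num_intents)
        self.intent_loss = nn.CrossEntropyLoss()

    def forward(self, tokens, slot_tags, intent_labels, mask):
        h, _ = self.bilstm(self.embed(tokens))           # (B, T, 2H)
        # Slot filling: negative CRF log-likelihood of the gold tag sequence.
        emissions = self.slot_proj(h)
        slot_loss = -self.crf(emissions, slot_tags, mask=mask, reduction='mean')
        # Intent detection: attention-weighted sum of hidden states.
        scores = self.attn(h).squeeze(-1).masked_fill(~mask, float('-inf'))
        alpha = torch.softmax(scores, dim=1)             # (B, T)
        sent = (alpha.unsqueeze(-1) * h).sum(dim=1)      # (B, 2H)
        intent_loss = self.intent_loss(self.intent_proj(sent), intent_labels)
        # Unified loss: both objectives are optimized jointly.
        return slot_loss + intent_loss

# Toy usage with random data, just to show the shapes involved.
model = JointSLU(vocab_size=5000, embed_dim=100, hidden_dim=128,
                 num_intents=10, num_slots=20)
tokens = torch.randint(1, 5000, (2, 12))
mask = torch.ones(2, 12, dtype=torch.bool)
slot_tags = torch.randint(0, 20, (2, 12))
intents = torch.randint(0, 10, (2,))
loss = model(tokens, slot_tags, intents, mask)
loss.backward()
```

In practice the two loss terms are often combined with a weighting coefficient; the abstract only states that the objectives are trained jointly, so the unweighted sum above is an assumption.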
Database: OpenAIRE