End-to-end architectures for ASR-free spoken language understanding
Author: Petr Mizera, Ioannis Gkinis, Themos Stafylakis, Elisavet Palogiannidi, George Mastrapas
Language: English
Year of publication: 2019
Subject: Computer science; Speech recognition; Natural language understanding; Spoken language; Recurrent neural network; End-to-end principle; Audio and Speech Processing (eess.AS); Electrical Engineering and Systems Science - Audio and Speech Processing
Source: ICASSP
Description: Spoken Language Understanding (SLU) is the problem of extracting meaning from speech utterances. It is typically addressed as a two-step problem, where an Automatic Speech Recognition (ASR) model is employed to convert speech into text, followed by a Natural Language Understanding (NLU) model that extracts meaning from the decoded text. Recently, end-to-end approaches have emerged, aiming to unify ASR and NLU into a single SLU deep neural architecture trained using combinations of ASR- and NLU-level recognition units. In this paper, we explore a set of recurrent architectures for intent classification, tailored to the recently introduced Fluent Speech Commands (FSC) dataset, where intents are formed as combinations of three slots (action, object, and location). We show that by combining deep recurrent architectures with standard data augmentation, state-of-the-art results can be attained without using ASR-level targets or pretrained ASR models. We also investigate the model's generalizability to new wordings, and show that it performs reasonably well on wordings unseen during training. Accepted at ICASSP-2020.
Database: OpenAIRE
External link:
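The ASR-free setup described in the abstract (a recurrent network mapping acoustic features directly to the three intent slots, with no intermediate text) can be sketched roughly as follows. This is a minimal pure-NumPy illustration, not the authors' implementation: the feature and hidden dimensions, the slot vocabulary sizes, and the random weights (standing in for trained parameters) are all hypothetical, and a real system would train a deep (bi)directional recurrent network over log-mel or filterbank features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 40-dim acoustic features, 64 hidden units.
FEAT, HID = 40, 64
# Hypothetical slot vocabulary sizes for an FSC-style intent
# (the intent is the triple of slot values).
SLOTS = {"action": 6, "object": 14, "location": 4}

# Randomly initialised weights stand in for trained parameters.
W_x = rng.normal(0, 0.1, (HID, FEAT))
W_h = rng.normal(0, 0.1, (HID, HID))
heads = {name: rng.normal(0, 0.1, (n, HID)) for name, n in SLOTS.items()}

def softmax(z):
    z = z - z.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_intent(frames):
    """Run a simple tanh RNN over acoustic frames, then decode each slot
    with its own classification head on the final hidden state."""
    h = np.zeros(HID)
    for x in frames:          # recurrence over the time axis
        h = np.tanh(W_x @ x + W_h @ h)
    return {name: int(np.argmax(softmax(W @ h))) for name, W in heads.items()}

# A 100-frame utterance of random "features" as a stand-in for real audio.
utterance = rng.normal(0, 1, (100, FEAT))
intent = predict_intent(utterance)
```

Note that the per-slot softmax heads mirror the paper's framing of intents as (action, object, location) combinations; no word-level or phoneme-level targets appear anywhere in the pipeline.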