Author:
Michael S. Phillips, Enrico Bocchieri, Iker Arizmendi, Chao Wang, Jay G. Wilpon, Mazin Gilbert, Diamantino Caseiro, Andrej Ljolje, Vincent Goffin
Year of publication:
2011
Source:
INTERSPEECH
DOI:
10.21437/interspeech.2011-416
Description:
A Mobile Virtual Assistant (MVA) is a communication agent that recognizes and understands free speech, and performs actions such as retrieving information and completing transactions. One essential characteristic of MVAs is their ability to learn and adapt without supervision. This paper describes our ongoing research in developing more intelligent MVAs that recognize and understand very large vocabulary speech input across a variety of tasks. In particular, we present our architecture for unsupervised acoustic and language model adaptation. Experimental results show that unsupervised acoustic model learning approaches the performance of supervised learning when adapting on 40-50 device-specific utterances. Unsupervised language model learning results in an 8% absolute drop in word error rate.
Database:
OpenAIRE
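Note: the abstract only summarizes the adaptation architecture; no implementation details are published in this record. As a rough illustration of what unsupervised language model adaptation generally involves (self-training on ASR hypotheses, not the authors' specific method), the sketch below collects bigram counts from automatically recognized utterances and interpolates the resulting estimates with a background model. All identifiers, the placeholder background probability, and the interpolation weight are hypothetical.

```python
# Illustrative sketch only, not the paper's implementation.
# General idea of unsupervised LM adaptation by self-training:
# n-gram counts are collected from ASR hypotheses (no manual transcripts)
# and linearly interpolated with a background language model.

from collections import Counter

def collect_bigram_counts(hypotheses):
    """Count bigrams in automatically recognized (unsupervised) transcripts."""
    counts = Counter()
    for hyp in hypotheses:
        words = ["<s>"] + hyp.split() + ["</s>"]
        counts.update(zip(words, words[1:]))
    return counts

def interpolate(background_prob, adapted_prob, lam=0.7):
    """Linear interpolation of background and adaptation-data probabilities."""
    return lam * background_prob + (1.0 - lam) * adapted_prob

# Example: adapt on a handful of device-specific ASR hypotheses (made up here).
hypotheses = [
    "find coffee shops near me",
    "call mom on her cell phone",
]
adapted_counts = collect_bigram_counts(hypotheses)

# Relative-frequency estimate from the adaptation data for one bigram.
history_total = sum(c for (w1, _), c in adapted_counts.items() if w1 == "coffee")
p_adapted = adapted_counts[("coffee", "shops")] / history_total

# A background probability would come from the deployed LM; 0.05 is a placeholder.
p_background = 0.05
print(interpolate(p_background, p_adapted))
```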