Showing 1 - 10 of 31 for the search: '"Recurrent neural network language models"'
Published in:
IEICE Transactions on Information and Systems. :2557-2567
We describe a novel way to implement subword language models in speech recognition systems based on weighted finite state transducers, hidden Markov models, and deep neural networks. The acoustic models are built on graphemes in a way that no pronunc…
External link:
https://explore.openaire.eu/search/publication?articleId=dedup_wf_001::1d8015504640452d651cf873dbb44a5d
https://aaltodoc.aalto.fi/handle/123456789/47346
Published in:
INTERSPEECH
Published in:
Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies.
Making good letter or word predictions can help accelerate the communication of users of high-tech AAC devices. This is particularly important for real-time person-to-person conversations. We investigate whether performing speech recognition on the…
Published in:
Proceedings of the Eighth Workshop on Speech and Language Processing for Assistive Technologies.
Language models have broad adoption in predictive typing tasks. When the typing history contains numerous errors, as in open-vocabulary predictive typing with brain-computer interface (BCI) systems, we observe significant performance degradation in b…
Academic article
Published in:
ISCSLP
A language model (LM) is an important part of a speech recognition system. Language model adaptation techniques use a large amount of source-domain data and limited target-domain data to improve the performance of language models in the target domain. Ev…
Author:
Yerbolat Khassanov, Eng Siong Chng
Published in:
INTERSPEECH
In automatic speech recognition (ASR) systems, recurrent neural network language models (RNNLM) are used to rescore a word lattice or an N-best hypotheses list. Due to the expensive training, the RNNLM's vocabulary set accommodates only a small shortlist…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c3022733d864cfc3e113f48d3cedbc25
http://arxiv.org/abs/1806.10306
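The N-best rescoring described in the abstract above can be sketched as follows. This is a minimal illustrative sketch, not the cited paper's method: the stand-in `rnnlm_score` function, the hypothesis scores, and the interpolation weight are all assumptions made for the example.

```python
# Hypothetical sketch of N-best rescoring with a language model.
# The toy "rnnlm_score" stands in for a trained RNNLM; a real one
# would return the sum of per-word log-probabilities given history.

def rnnlm_score(words):
    # Stand-in LM score: simply penalizes longer hypotheses.
    return -0.5 * len(words)

def rescore_nbest(nbest, lm_weight=0.7):
    """Re-rank (hypothesis, acoustic_logprob) pairs by a weighted
    combination of the acoustic score and the LM score."""
    def combined(item):
        words, am_logprob = item
        return am_logprob + lm_weight * rnnlm_score(words)
    return sorted(nbest, key=combined, reverse=True)

# Illustrative N-best list: (word sequence, acoustic log-probability).
nbest = [
    (["recurrent", "neural", "network"], -12.0),
    (["recurrent", "neural", "networks"], -11.5),
]
best_words, _ = rescore_nbest(nbest)[0]
```

In practice the LM weight is tuned on held-out data, and the shortlist limitation mentioned in the abstract means out-of-shortlist words must be handled separately (e.g. backed off to an n-gram estimate).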
Published in:
INTERSPEECH
Recurrent neural network language models (RNNLMs) are powerful language modeling techniques. Significant performance improvements over n-gram language models have been reported in a range of tasks, including speech recognition. Conventional n-g…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2a3b04cb6af22e29075024d8b28d60db
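For contrast with the RNNLM approach in the abstract above, the conventional n-gram baseline it mentions can be sketched as a count-based bigram model. The toy corpus and the add-alpha smoothing below are illustrative assumptions, not from the cited work.

```python
# Minimal count-based bigram LM with add-alpha smoothing.
from collections import Counter
import math

corpus = "the cat sat on the mat the cat ran".split()
unigrams = Counter(corpus)                     # word counts
bigrams = Counter(zip(corpus, corpus[1:]))     # adjacent word-pair counts
V = len(unigrams)                              # vocabulary size

def bigram_logprob(prev, word, alpha=1.0):
    # Smoothed conditional log-probability log P(word | prev).
    return math.log((bigrams[(prev, word)] + alpha)
                    / (unigrams[prev] + alpha * V))
```

Unlike an RNNLM, which conditions on the full history through its recurrent state, this model sees only the single previous word, which is the limitation RNNLMs were introduced to address.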
Author:
John D. Kelleher, Giancarlo D. Salton
Published in:
RANLP
Articles
Language Models (LMs) are important components in several Natural Language Processing systems. Recurrent Neural Network LMs composed of LSTM units, especially those augmented with an external memory, have achieved state-of-the-art results. However, t…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::eeb446fbf75906d266492c3903566c74