Showing 1 - 10 of 580 results for the search: '"Recurrent neural network language models"'
The high memory consumption and computational costs of Recurrent neural network language models (RNNLMs) limit their wider application on resource constrained devices. In recent years, neural network quantization techniques that are capable of produc…
External link:
http://arxiv.org/abs/2111.14836
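As context for what weight quantization of an RNNLM involves, here is a minimal sketch of plain post-training uniform quantization in NumPy; the bit width and matrix size are arbitrary, and this is not the technique proposed in the paper.

    import numpy as np

    def quantize_uniform(weights, num_bits=8):
        """Map a float32 weight matrix onto num_bits-wide unsigned integers."""
        qmax = 2 ** num_bits - 1
        w_min, w_max = float(weights.min()), float(weights.max())
        scale = (w_max - w_min) / qmax if w_max > w_min else 1.0
        q = np.clip(np.round((weights - w_min) / scale), 0, qmax).astype(np.uint8)
        return q, scale, w_min

    def dequantize(q, scale, w_min):
        """Recover an approximate float32 matrix from the quantized form."""
        return q.astype(np.float32) * scale + w_min

    # Example: quantize one recurrent weight matrix of a hypothetical RNNLM.
    w = np.random.randn(512, 512).astype(np.float32)
    q, scale, w_min = quantize_uniform(w)
    print("max reconstruction error:", np.abs(dequantize(q, scale, w_min) - w).max())

Storing the uint8 codes plus a scale and offset is what yields the memory savings; more sophisticated schemes mainly differ in how the quantization levels are chosen and trained.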
Autor:
Davis, Forrest, van Schijndel, Marten
A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e. is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative…
External link:
http://arxiv.org/abs/2005.00165
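The evaluation setup this snippet describes, comparing the probability a model assigns to a grammatical sentence against an ungrammatical counterpart, can be sketched as follows; GPT-2 and the minimal pair below are stand-ins, not the paper's models or stimuli.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def sentence_log_prob(sentence):
        """Sum of token log-probabilities under the language model."""
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
        token_lp = log_probs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        return token_lp.sum().item()

    grammatical = "The keys to the cabinet are on the table."
    ungrammatical = "The keys to the cabinet is on the table."
    print(sentence_log_prob(grammatical) > sentence_log_prob(ungrammatical))

A model "passes" an item of this kind when the grammatical member of the pair receives the higher log-probability.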
We present a new theoretical perspective of data noising in recurrent neural network language models (Xie et al., 2017). We show that each variant of data noising is an instance of Bayesian recurrent neural networks with a particular variational dist…
External link:
http://arxiv.org/abs/1901.09296
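For reference, the unigram data-noising scheme of Xie et al. (2017) that this work reinterprets can be sketched roughly as below; the noising probability and toy corpus are placeholders.

    import random
    from collections import Counter

    def unigram_noise(tokens, unigram_counts, gamma=0.2):
        """With probability gamma, replace each token with a draw from the unigram distribution."""
        vocab, weights = zip(*unigram_counts.items())
        return [random.choices(vocab, weights=weights)[0] if random.random() < gamma else t
                for t in tokens]

    corpus = "the cat sat on the mat the dog sat on the rug".split()
    counts = Counter(corpus)
    print(unigram_noise("the cat sat on the mat".split(), counts, gamma=0.3))

Applying this corruption to training inputs (and, in some variants, targets) acts as a smoothing regularizer for the language model.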
Author:
Khassanov, Yerbolat, Chng, Eng Siong
In automatic speech recognition (ASR) systems, recurrent neural network language models (RNNLM) are used to rescore a word lattice or N-best hypotheses list. Due to the expensive training, the RNNLM's vocabulary set accommodates only a small shortlist…
External link:
http://arxiv.org/abs/1806.10306
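The rescoring step mentioned here is easy to illustrate: combine each hypothesis's first-pass score with an RNNLM log-probability and re-rank. The interpolation weight and the stand-in scorer below are illustrative only, not the paper's configuration.

    def rescore_nbest(hypotheses, rnnlm_log_prob, lm_weight=0.5):
        """hypotheses: list of (text, first_pass_score); returns the list re-ranked by combined score."""
        rescored = [(text, first_pass + lm_weight * rnnlm_log_prob(text))
                    for text, first_pass in hypotheses]
        return sorted(rescored, key=lambda x: x[1], reverse=True)

    # Toy usage with a placeholder in place of a real RNNLM scorer.
    nbest = [("i saw the ship", -12.3), ("eye saw the sheep", -12.1)]
    fake_lm = lambda text: -0.5 * len(text.split())
    print(rescore_nbest(nbest, fake_lm)[0][0])

The shortlist issue the abstract raises arises inside rnnlm_log_prob: out-of-shortlist words must be scored by a fallback or a special class, which is what such papers try to improve.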
Author:
Tuor, Aaron, Baerwolf, Ryan, Knowles, Nicolas, Hutchinson, Brian, Nichols, Nicole, Jasper, Rob
Automated analysis methods are crucial aids for monitoring and defending a network to protect the sensitive or confidential data it hosts. This work introduces a flexible, powerful, and unsupervised approach to detecting anomalous behavior in compute…
External link:
http://arxiv.org/abs/1712.00557
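A common pattern behind unsupervised log-based anomaly detection is to model normal event text with a language model and flag low-likelihood lines. The toy sketch below uses smoothed character-bigram counts in place of a recurrent model, and the example logs are made up.

    from collections import defaultdict
    import math

    def train_bigram(lines):
        counts = defaultdict(lambda: defaultdict(int))
        for line in lines:
            for a, b in zip(line, line[1:]):
                counts[a][b] += 1
        return counts

    def neg_log_likelihood(line, counts, alpha=1.0, vocab_size=256):
        """Per-character negative log-likelihood; higher means more anomalous."""
        nll = 0.0
        for a, b in zip(line, line[1:]):
            total = sum(counts[a].values())
            nll -= math.log((counts[a][b] + alpha) / (total + alpha * vocab_size))
        return nll / max(len(line) - 1, 1)

    normal_logs = ["user alice login ok", "user bob login ok", "user alice logout"]
    model = train_bigram(normal_logs)
    print(neg_log_likelihood("user eve delete all tables", model))

A deployed system would replace the bigram model with a trained neural language model and choose an alerting threshold on held-out normal traffic.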
We propose BlackOut, an approximation algorithm to efficiently train massive recurrent neural network language models (RNNLMs) with million word vocabularies. BlackOut is motivated by using a discriminative loss, and we describe a new sampling strate…
External link:
http://arxiv.org/abs/1511.06909
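BlackOut belongs to the family of sampled approximations to the full softmax over a large vocabulary. The sketch below shows a generic sampled-softmax-style loss in PyTorch, not BlackOut's specific proposal distribution or weighting.

    import torch
    import torch.nn.functional as F

    def sampled_softmax_loss(hidden, output_weights, target_ids, num_samples=100):
        """hidden: (batch, dim); output_weights: (vocab, dim); approximates full-softmax cross-entropy."""
        vocab_size = output_weights.size(0)
        neg_ids = torch.randint(vocab_size, (num_samples,), device=hidden.device)
        ids = torch.cat([target_ids.unsqueeze(1), neg_ids.expand(hidden.size(0), -1)], dim=1)
        logits = torch.einsum("bd,bkd->bk", hidden, output_weights[ids])
        # The true target sits in column 0 of the sampled logits.
        labels = torch.zeros(hidden.size(0), dtype=torch.long, device=hidden.device)
        return F.cross_entropy(logits, labels)

    hidden = torch.randn(4, 128)
    weights = torch.randn(50000, 128)
    targets = torch.randint(50000, (4,))
    print(sampled_softmax_loss(hidden, weights, targets).item())

A real implementation would also correct for the sampling distribution and exclude the target from the negative draws; both are omitted here for brevity.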
This paper investigates the scaling properties of Recurrent Neural Network Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and address the questions of how RNNLMs scale with respect to model size, training-set size, computat…
External link:
http://arxiv.org/abs/1502.00512
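For context, the object of such scaling studies is a plain recurrent language model; a minimal LSTM LM and a single GPU training step (all sizes arbitrary, not the paper's setup) might look like the following.

    import torch
    import torch.nn as nn

    class RNNLM(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens):
            h, _ = self.lstm(self.embed(tokens))
            return self.out(h)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = RNNLM().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    batch = torch.randint(10000, (32, 35), device=device)   # (batch, sequence length)
    optimizer.zero_grad()
    logits = model(batch[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, 10000), batch[:, 1:].reshape(-1))
    loss.backward()
    optimizer.step()
    print(loss.item())

Scaling studies of this kind vary the hidden size, vocabulary, and training-set size of such a model and measure perplexity against compute.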
Academic article
Author:
Shi, Yangyang, Larson, Martha, Pelemans, Joris, Jonker, Catholijn M., Wambacq, Patrick, Wiggers, Pascal, Demuynck, Kris
Published in:
Speech Communication, vol. 73 (October 2015), pp. 64-80
Academic article (sign-in required to view this record)