Showing 1 - 10 of 28 for search: '"Verwimp, Lyan"'
Author:
Jalota, Rricha, Verwimp, Lyan, Nussbaum-Thom, Markus, Mousa, Amr, Argueta, Arturo, Oualil, Youssef
Neural Network Language Models (NNLMs) for Virtual Assistants (VAs) are generally language-, region-, and in some cases, device-dependent, which increases the effort to scale and maintain them. Combining NNLMs for one or more of the categories is one…
External link:
http://arxiv.org/abs/2403.18783
On-device automatic speech recognition systems face several challenges compared to server-based systems. They have to meet stricter constraints in terms of speed, disk size and memory while maintaining the same accuracy. Often they have to serve several…
External link:
http://arxiv.org/abs/2305.09764
Author:
Nguyen, Thien, Tran, Nathalie, Deng, Liuhui, da Silva, Thiago Fraga, Radzihovsky, Matthew, Hsiao, Roger, Mason, Henry, Braun, Stefan, McDermott, Erik, Can, Dogan, Swietojanski, Pawel, Verwimp, Lyan, Oyman, Sibel, Arvizo, Tresi, Silovsky, Honza, Ghoshal, Arnab, Martel, Mathieu, Ambati, Bharat Ram, Ali, Mohamed
Code-switching describes the practice of using more than one language in the same sentence. In this study, we investigate how to optimize a neural transducer based bilingual automatic speech recognition (ASR) model for code-switching speech. Focusing…
External link:
http://arxiv.org/abs/2210.12214
Author:
Boes, Wim, Van Rompaey, Robbe, Verwimp, Lyan, Pelemans, Joris, Van hamme, Hugo, Wambacq, Patrick
Published in:
ESANN 2020 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (2020) 625-630
We inspect the long-term learning ability of Long Short-Term Memory language models (LSTM LMs) by evaluating a contextual extension based on the Continuous Bag-of-Words (CBOW) model for both sentence- and discourse-level LSTM LMs and by analyzing its… (a sketch of such a CBOW extension follows the link below)
External link:
http://arxiv.org/abs/2106.08927
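To make the idea concrete, here is a minimal sketch (my own, not the paper's exact architecture) of a CBOW-style contextual extension of an LSTM LM: the average of the embeddings of all preceding words is concatenated to the current word embedding at the LSTM input. All class names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CBOWContextLSTMLM(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # LSTM input = word embedding + CBOW context vector
        self.lstm = nn.LSTM(2 * emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                     # tokens: (batch, seq)
        emb = self.embed(tokens)                   # (batch, seq, emb_dim)
        # Running mean of embeddings up to each position = CBOW context.
        cumsum = emb.cumsum(dim=1)
        counts = torch.arange(1, emb.size(1) + 1, device=emb.device)
        context = cumsum / counts.view(1, -1, 1)   # (batch, seq, emb_dim)
        # Shift by one so position t only sees words before t; the roll
        # wraps the last mean to position 0, so zero that slot out.
        context = torch.roll(context, shifts=1, dims=1)
        context[:, 0, :] = 0.0
        h, _ = self.lstm(torch.cat([emb, context], dim=-1))
        return self.out(h)                         # next-word logits
```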
Language models (LMs) for virtual assistants (VAs) are typically trained on large amounts of data, resulting in prohibitively large models which require excessive memory and/or cannot be used to serve user requests in real-time. Entropy pruning results… (a simplified pruning sketch follows the link below)
External link:
http://arxiv.org/abs/2102.07219
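Entropy pruning, in the spirit of Stolcke (1998), drops n-grams whose removal barely changes the model distribution. A simplified sketch, assuming hypothetical accessor functions p_hist, p_ngram and p_backoff, and ignoring the back-off weight renormalization a full implementation needs:

```python
import math

def prune_ngrams(ngrams, p_hist, p_ngram, p_backoff, threshold=1e-8):
    """Keep only n-grams whose removal would noticeably distort the LM."""
    kept = []
    for (history, word) in ngrams:
        p = p_ngram(history, word)        # P(word | history)
        p_bo = p_backoff(history, word)   # back-off estimate used if pruned
        # Weighted KL contribution of replacing p by its back-off estimate.
        d = p_hist(history) * p * math.log(p / p_bo)
        if d >= threshold:
            kept.append((history, word))
    return kept
```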
Author:
Verwimp, Lyan, Bellegarda, Jerome R.
Natural language processing (NLP) tasks tend to suffer from a paucity of suitably annotated training data, hence the recent success of transfer learning across a wide variety of them. The typical recipe involves: (i) training a deep, possibly bidirectional… (a generic fine-tuning sketch follows the link below)
External link:
http://arxiv.org/abs/1909.04130
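The recipe the abstract alludes to is, in generic form: pre-train an encoder as a language model on unlabeled text, then fine-tune a small task head on the annotated data. A sketch under those assumptions; the names and the final-state pooling are illustrative, not the paper's method:

```python
import torch.nn as nn

class TransferClassifier(nn.Module):
    """A pre-trained encoder (e.g. an LM stripped of its softmax layer)
    reused for a downstream task, plus a freshly initialized head."""
    def __init__(self, pretrained_encoder, hidden_dim, num_labels):
        super().__init__()
        self.encoder = pretrained_encoder              # reused weights
        self.head = nn.Linear(hidden_dim, num_labels)  # trained from scratch

    def forward(self, tokens):
        h = self.encoder(tokens)          # (batch, seq, hidden_dim)
        return self.head(h[:, -1, :])     # classify from the final state
```

During fine-tuning one would typically optimize both parts, often with a smaller learning rate for the pre-trained encoder.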
Neural cache language models (LMs) extend the idea of regular cache language models by making the cache probability dependent on the similarity between the current context and the context of the words in the cache. We make an extensive comparison of… (a minimal cache sketch follows the link below)
External link:
http://arxiv.org/abs/1809.08826
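For reference, a minimal sketch of the continuous cache of Grave et al. (2017), the model family the abstract builds on: the cache assigns mass to a word in proportion to the similarity between the current hidden state and the states where that word occurred, and the result is interpolated with the base LM. The flatness parameter theta and interpolation weight lam are hypothetical tuning values.

```python
import torch

def cache_probs(h_t, cache_states, cache_words, vocab_size, theta=0.3):
    """h_t: (hidden,); cache_states: (n, hidden); cache_words: (n,) word ids."""
    # Similarity of current state to each cached state, normalized.
    scores = torch.softmax(theta * cache_states @ h_t, dim=0)   # (n,)
    p = torch.zeros(vocab_size)
    p.index_add_(0, cache_words, scores)   # accumulate mass per word id
    return p

def interpolate(p_lm, p_cache, lam=0.2):
    # Final distribution: linear interpolation of base LM and cache.
    return (1 - lam) * p_lm + lam * p_cache
```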
We present a framework for analyzing what the state in RNNs remembers from its input embeddings. Our approach is inspired by backpropagation, in the sense that we compute the gradients of the states with respect to the input embeddings. The gradient… (a short autograd sketch follows the link below)
External link:
http://arxiv.org/abs/1805.04264
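The probing idea translates directly into a few lines of autograd code. A minimal sketch with toy sizes, not the paper's setup: backpropagate from the final hidden state to the input embeddings and read off one influence score per input word.

```python
import torch
import torch.nn as nn

emb = nn.Embedding(1000, 32)
lstm = nn.LSTM(32, 64, batch_first=True)

tokens = torch.tensor([[4, 17, 256, 3]])
x = emb(tokens).detach().requires_grad_(True)   # treat embeddings as leaves
h, _ = lstm(x)                                  # (1, seq, hidden)

# Gradient of the final state's L2 norm w.r.t. every input embedding.
h[0, -1].norm().backward()
influence = x.grad.norm(dim=-1)                 # (1, seq): one score per word
print(influence)
```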
In Flanders, all TV shows are subtitled. However, the process of subtitling is a very time-consuming one and can be sped up by providing the output of a speech recognizer run on the audio of the TV show, prior to the subtitling. Naturally, this speech…
External link:
http://arxiv.org/abs/1709.03759
Published in:
European Chapter of the Association for Computational Linguistics (EACL) 2017, Valencia, Spain, pp. 417-427
We present a Character-Word Long Short-Term Memory Language Model which both reduces the perplexity with respect to a baseline word-level language model and reduces the number of parameters of the model. Character information can reveal structural (dis)similarities… (a sketch of a character-word input layer follows the link below)
External link:
http://arxiv.org/abs/1704.02813
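A minimal sketch of the character-word idea as I read it: shrink the word embedding and concatenate embeddings of the word's characters, so the LSTM input stays compact with fewer embedding parameters. All sizes, and the fixed number of characters per word, are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharWordInput(nn.Module):
    """Concatenates a (small) word embedding with character embeddings."""
    def __init__(self, vocab_size, n_chars, word_dim=96, char_dim=8):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, chars_per_word)
        w = self.word_emb(word_ids)                        # (b, s, word_dim)
        c = self.char_emb(char_ids).flatten(start_dim=2)   # (b, s, chars*char_dim)
        return torch.cat([w, c], dim=-1)                   # feeds the LSTM LM
```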