Showing 1 - 10 of 18 for search: '"Patrick Doetsch"'
Author:
Adria A. Martinez-Villaronga, Adrià Giménez, Hermann Ney, Javier Jorge, Patrick Doetsch, Albert Sanchis, Pavel Golik, Vicent Andreu Císcar, Joan Albert Silvestre-Cerdà, Alfons Juan
Published in:
IberSPEECH 2018
Published in:
ICASSP
In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. …
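The abstract describes a layer-configurable training toolkit. As a purely illustrative sketch (the dictionary keys, layer classes, and settings below are assumptions, not the released software's actual configuration syntax), a dictionary-driven network definition in that spirit could look like this:

```python
# Illustrative sketch only: a dictionary-driven network definition in the
# spirit of the configurable training software described above. Layer names,
# keys, and hyperparameters are assumptions, not the toolkit's real syntax.
network = {
    "lstm_fwd": {"class": "rec", "unit": "lstm", "n_out": 512, "direction": 1,  "from": ["data"]},
    "lstm_bwd": {"class": "rec", "unit": "lstm", "n_out": 512, "direction": -1, "from": ["data"]},
    "output":   {"class": "softmax", "loss": "ce", "from": ["lstm_fwd", "lstm_bwd"]},
}

# Training settings, again purely illustrative; the abstract highlights
# efficient multi-GPU training of recurrent topologies.
num_epochs = 30
devices = ["gpu0", "gpu1"]
```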
Published in:
IEEE Journal of Selected Topics in Signal Processing 11(8), 1265-1273 (2017). doi:10.1109/JSTSP.2017.2752691
In this paper, we propose an inverted alignment approach for sequence classification systems like automatic speech recognition (ASR) that naturally incorporates discriminative, artificial-neural-network-based label distributions. Instead of aligning …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::21208efa8a2622958852a4d6ad062651
Published in:
ICFHR
Recurrent neural networks that can be trained end-to-end on sequence learning tasks provide promising benefits over traditional recognition systems. In this paper, we demonstrate the application of an attention-based long short-term memory decoder …
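The attention-based decoder mentioned in this snippet can be summarized, in generic form, by the standard content-based attention equations below; the paper's exact scoring function and model sizes may differ:

```latex
\begin{aligned}
e_{n,t}      &= v^{\top} \tanh\!\left(W s_{n-1} + V h_t\right), \\
\alpha_{n,t} &= \frac{\exp(e_{n,t})}{\sum_{t'} \exp(e_{n,t'})}, \qquad
c_n = \sum_{t} \alpha_{n,t}\, h_t,
\end{aligned}
```

where $h_t$ are the encoder states over the input feature sequence, $s_{n-1}$ is the previous decoder LSTM state, and the context vector $c_n$ conditions the prediction of the $n$-th output character.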
Handwriting Recognition with Large Multidimensional Long Short-Term Memory Recurrent Neural Networks
Published in:
ICFHR
Multidimensional long short-term memory recurrent neural networks achieve impressive results for handwriting recognition. However, with current CPU-based implementations, their training is very expensive and thus their capacity has so far been limited …
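For context on what "multidimensional" means here, a generic 2D-LSTM recurrence (a hedged sketch, not necessarily the paper's exact formulation) lets the hidden state at pixel position (i, j) depend on both the left and the upper neighbour:

```latex
h_{i,j} = \mathrm{LSTM}\!\left(x_{i,j},\; h_{i-1,j},\; h_{i,j-1}\right)
```

This sequential dependency along both image axes is what makes naive CPU implementations expensive for large networks, as the abstract notes.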
Published in:
ICFHR
In this paper, we elaborate the advantages of combining two neural network methodologies, convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent neural networks, with the framework of hybrid hidden Markov models (HMM) …
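In the hybrid HMM framework mentioned in this snippet, the neural network (here the CNN/LSTM stack) emits per-frame state posteriors that are converted into scaled likelihoods before HMM decoding. A minimal sketch of that conversion, with illustrative function and argument names not taken from the paper:

```python
import numpy as np

def posteriors_to_scaled_log_likelihoods(posteriors, state_priors, prior_scale=1.0):
    """Hybrid-HMM emission scores: log p(x_t|s) ~ log p(s|x_t) - prior_scale * log p(s).

    posteriors:   (T, S) per-frame state posteriors from the CNN/LSTM network
    state_priors: (S,)   relative state frequencies estimated on the training data
    """
    eps = 1e-20  # guard against log(0)
    return np.log(posteriors + eps) - prior_scale * np.log(state_priors + eps)
```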
Published in:
ICASSP
We present a comprehensive study of deep bidirectional long short-term memory (LSTM) recurrent neural network (RNN) based acoustic models for automatic speech recognition (ASR). We study the effect of size and depth and train models of up to 8 layers …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::8f03ec28cbe4aefb333547378ba2721e
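As a rough illustration of the model family studied in this paper (the layer count, sizes, and output dimension below are assumptions, not the reported configuration), a deep bidirectional LSTM acoustic model can be written in a few lines of PyTorch:

```python
import torch
import torch.nn as nn

# Minimal sketch, not the paper's exact model: stacked bidirectional LSTM
# layers followed by a per-frame linear output over tied HMM states.
class DeepBLSTMAcousticModel(nn.Module):
    def __init__(self, feat_dim=40, hidden=512, layers=8, num_states=4500):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, num_layers=layers,
                             bidirectional=True, batch_first=True)
        self.output = nn.Linear(2 * hidden, num_states)

    def forward(self, x):            # x: (batch, time, feat_dim)
        h, _ = self.blstm(x)
        return self.output(h)        # per-frame state logits
```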
Published in:
ICDAR
Multiple types of models are used in handwriting recognition and can be broadly categorized into generative and discriminative models. Gaussian Hidden Markov Models are used successfully in most of the systems. Discriminative training can be applied …
Published in:
ICDAR
Multiple classifier systems are used to improve baseline results using different strategies. Bagging by design improves standard bagging by the minimization of intersection between the different ensembles. This work proposes the use of design bagging …
Published in:
ICASSP
We investigate sequence-discriminative training of long short-term memory recurrent neural networks using the maximum mutual information criterion. We show that although recurrent neural networks already make use of the whole observation sequence and …
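The maximum mutual information (MMI) criterion referred to in this snippet is, in its usual form (a standard formulation, not necessarily the paper's exact notation):

```latex
F_{\mathrm{MMI}}(\theta) \;=\; \sum_{r} \log
\frac{p_{\theta}(X_r \mid W_r)^{\kappa}\, P(W_r)}
     {\sum_{W} p_{\theta}(X_r \mid W)^{\kappa}\, P(W)}
```

where $X_r$ is the $r$-th training utterance, $W_r$ its reference transcription, the denominator sums over competing word sequences $W$ (in practice represented by a lattice), and $\kappa$ is an acoustic scaling factor.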