Author: |
Kollias, Stefanos, Stafylopatis, Andreas, Duch, Włodzisław, Oja, Erkki, Schäfer, Anton Maximilian, Udluft, Steffen, Zimmermann, Hans Georg |
Source: |
Artificial Neural Networks - ICANN 2006; 2006, p71-80, 10p |
Abstract: |
Recurrent neural networks (RNNs) unfolded in time are in theory able to map any open dynamical system. Still, they are often said to be unable to identify long-term dependencies in the data. In particular, when they are trained with backpropagation through time (BPTT), it is claimed that RNNs unfolded in time fail to learn inter-temporal influences more than ten time steps apart. This paper provides a disproof of this often-cited statement. We show that RNNs, and especially normalised recurrent neural networks (NRNNs), unfolded in time are indeed very capable of learning time lags of at least a hundred time steps. We further demonstrate that the problem of a vanishing gradient does not apply to these networks. [ABSTRACT FROM AUTHOR] |
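The vanishing-gradient concern addressed in the abstract can be illustrated with a minimal NumPy sketch. It assumes a plain tanh RNN rather than the authors' normalised architecture, and the symbols A, B, T and the state dimension are illustrative choices, not taken from the paper. BPTT multiplies one Jacobian per unfolded time step, so the gradient of the final state with respect to an input a hundred steps back can shrink (or grow) geometrically with the lag, depending on the recurrent weights.

```python
import numpy as np

# Illustrative sketch (not the authors' NRNN): a tanh RNN unfolded for
# T time steps, with the BPTT gradient traced back over the full lag.
rng = np.random.default_rng(0)
n = 20    # state dimension (illustrative)
T = 100   # number of unfolded time steps, i.e. a "long" time lag

A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # recurrent weights
B = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # input weights

# Forward pass: unfold the recurrence s_t = tanh(A s_{t-1} + B x_t).
states = [np.zeros(n)]
inputs = [rng.normal(size=n) for _ in range(T)]
for x in inputs:
    states.append(np.tanh(A @ states[-1] + B @ x))

# Backward pass (BPTT): chain the step Jacobians
# d s_t / d s_{t-1} = diag(1 - s_t^2) A and record the gradient norm
# of d s_T / d s_{t-1} at each lag.
grad = np.eye(n)
norms = []
for t in range(T, 0, -1):
    grad = grad @ (np.diag(1.0 - states[t] ** 2) @ A)
    norms.append(np.linalg.norm(grad))

print(f"gradient norm at lag 1:  {norms[0]:.3e}")
print(f"gradient norm at lag {T}: {norms[-1]:.3e}")
```

With this random weight scaling the printed norm at lag 100 is typically many orders of magnitude smaller than at lag 1, which is the behaviour the paper argues does not have to occur for suitably constructed (normalised) recurrent networks.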
Database: |
Complementary Index |
External link: |
|