Achieving online regression performance of LSTMs with simple RNNs
Authors: | Selim Firat Yilmaz, N. Mert Vural, Suleyman S. Kozat, Fatih Ilhan, Salih Ergut |
---|---|
Contributors: | Vural, Nuri Mert, İlhan, Fatih, Yılmaz, Selim Fırat, Kozat, Süleyman Serdar |
Language: | English |
Year of publication: | 2021 |
Subjects: |
FOS: Computer and information sciences
Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Artificial Intelligence; Recurrent neural networks (RNNs); Long-term memory; Online learning; Online gradient descent; Regret; Regression; Rate of convergence; Time complexity; Vanishing gradient problem; Nonlinear systems; Neural network training; Computer Networks and Communications; Computer Science Applications; Software; Algorithms |
Source: | IEEE Transactions on Neural Networks and Learning Systems. |
Description: | Recurrent neural networks (RNNs) are widely used for online regression due to their ability to generalize nonlinear temporal dependencies. Among RNN models, Long Short-Term Memory networks (LSTMs) are commonly preferred in practice, as they can learn long-term dependencies while avoiding the vanishing gradient problem. However, due to their large number of parameters, LSTMs require considerably longer training time than simple RNNs (SRNNs). In this paper, we achieve the online regression performance of LSTMs with SRNNs efficiently. To this end, we introduce a first-order training algorithm whose time complexity is linear in the number of parameters. We show that SRNNs trained with our algorithm attain regression performance very similar to that of LSTMs in two to three times shorter training time. We support our experimental results with a theoretical analysis that provides regret bounds on the convergence rate of our algorithm. Through an extensive set of experiments, we verify our theoretical work and demonstrate significant performance improvements of our algorithm with respect to LSTMs and other state-of-the-art learning models. arXiv admin note: substantial text overlap with arXiv:2003.03601 |
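The abstract describes first-order (gradient-based) online training of an SRNN for regression. The paper's specific algorithm is not detailed in this record, so the sketch below stands in with a generic per-step online gradient descent update using depth-1 truncated backpropagation; the class name, learning rate, and network sizes are all illustrative assumptions, not the authors' method.

```python
import numpy as np

class OnlineSRNN:
    """Minimal simple-RNN regressor trained by online gradient descent.

    A generic sketch only: per-step SGD with depth-1 truncated backprop
    (the previous hidden state is treated as a constant), giving an
    update cost linear in the number of parameters.
    """

    def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_hidden)
        self.W_in = rng.uniform(-s, s, (n_hidden, n_in))    # input weights
        self.W_rec = rng.uniform(-s, s, (n_hidden, n_hidden))  # recurrent weights
        self.w_out = rng.uniform(-s, s, n_hidden)           # output weights
        self.h = np.zeros(n_hidden)                         # hidden state
        self.lr = lr

    def step(self, x, y):
        """Predict from x, then update all weights on the squared error."""
        h_prev = self.h
        h = np.tanh(self.W_in @ x + self.W_rec @ h_prev)
        y_hat = self.w_out @ h
        err = y_hat - y                       # d(0.5*err^2)/d(y_hat)
        # Depth-1 truncated gradient through the tanh nonlinearity.
        dh = err * self.w_out * (1.0 - h ** 2)
        self.w_out -= self.lr * err * h
        self.W_in -= self.lr * np.outer(dh, x)
        self.W_rec -= self.lr * np.outer(dh, h_prev)
        self.h = h
        return y_hat, 0.5 * err ** 2

# Online next-step prediction of a noisy sine wave (synthetic example).
rng = np.random.default_rng(1)
t = np.arange(2000)
xs = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)
model = OnlineSRNN(n_in=1, n_hidden=16)
losses = [model.step(np.array([xs[i]]), xs[i + 1])[1] for i in range(t.size - 1)]
```

On this toy stream, the running squared-error loss should fall as the SRNN adapts online; the paper's algorithm adds the theoretical regret guarantees that a plain SGD sketch like this does not carry.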
Database: | OpenAIRE |
External link: |