Warming up recurrent neural networks to maximise reachable multistability greatly improves learning.

Authors: Lambrechts G (Montefiore Institute, University of Liège, 10 allée de la découverte, 4000 Liège, Belgium; gaspard.lambrechts@uliege.be); De Geeter F (Montefiore Institute, University of Liège; florent.degeeter@uliege.be); Vecoven N (Montefiore Institute, University of Liège); Ernst D (Montefiore Institute, University of Liège; LTCI, Telecom Paris, Institut Polytechnique de Paris, 19 place Marguerite Perey, 91120 Palaiseau, France; dernst@uliege.be); Drion G (Montefiore Institute, University of Liège; gdrion@uliege.be)
Language: English
Source: Neural Networks: The Official Journal of the International Neural Network Society [Neural Netw] 2023 Sep; Vol. 166, pp. 645-669. Date of Electronic Publication: 2023 Aug 07.
DOI: 10.1016/j.neunet.2023.07.023
Abstract: Training recurrent neural networks is known to be difficult when time dependencies become long. In this work, we show that most standard cells have only one stable equilibrium at initialisation, and that learning on tasks with long time dependencies generally occurs once the number of stable network equilibria increases, a property known as multistability. Multistability is often not easily attained by initially monostable networks, making it difficult to learn long time dependencies between inputs and outputs. This insight leads to the design of a novel way to initialise any recurrent cell connectivity through a procedure called "warmup", which improves its capability to learn arbitrarily long time dependencies. This initialisation procedure is designed to maximise, in a few gradient steps, the network's reachable multistability, i.e., the number of equilibria within the network that can be reached through relevant input trajectories. We show on several information restitution, sequence classification, and reinforcement learning benchmarks that warming up greatly improves learning speed and performance for multiple recurrent cells, but sometimes impedes precision. We therefore introduce a double-layer architecture initialised with a partial warmup that is shown to greatly improve the learning of long time dependencies while maintaining high levels of precision. This approach provides a general framework for improving the learning abilities of any recurrent cell when long time dependencies are present. We also show empirically that other initialisation and pretraining procedures from the literature implicitly foster the reachable multistability of recurrent cells.
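The abstract describes warmup only at a high level: a few gradient steps on an objective that maximises the number of stable equilibria reachable through relevant input trajectories. The PyTorch sketch below is a hypothetical illustration of that idea, not the authors' exact procedure; the pairwise-distance diversity loss, the zero-input settling phase, and all hyperparameters are assumptions introduced for illustration.

```python
# Hypothetical "warmup" initialisation sketch based only on the abstract:
# before task training, adjust the cell's weights for a few gradient steps so
# that distinct input trajectories settle into distinct stable states
# (reachable multistability). The diversity objective below is an assumption,
# not the paper's exact loss.
import torch
import torch.nn as nn

def warmup(cell: nn.GRUCell, n_steps: int = 50, batch: int = 32,
           seq_len: int = 100, settle: int = 50, lr: float = 1e-3) -> None:
    """Nudge `cell` towards multistability before task training."""
    opt = torch.optim.Adam(cell.parameters(), lr=lr)
    for _ in range(n_steps):
        # Drive the cell with a batch of random input trajectories...
        h = torch.zeros(batch, cell.hidden_size)
        for _ in range(seq_len):
            h = cell(torch.randn(batch, cell.input_size), h)
        # ...then let it settle with zero input so each state approaches
        # an equilibrium of the autonomous dynamics.
        zero_in = torch.zeros(batch, cell.input_size)
        for _ in range(settle):
            h = cell(zero_in, h)
        # Assumed diversity objective: push settled states apart (clamped so
        # states are not driven to infinity), rewarding distinct attractors.
        loss = -torch.pdist(h).clamp(max=1.0).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

cell = nn.GRUCell(input_size=4, hidden_size=64)
warmup(cell)  # afterwards, train on the actual task as usual
```

After such a warmup, the cell would be trained on the downstream task with a standard optimiser; the abstract's partial-warmup variant would presumably apply this procedure to only one layer of a double-layer architecture.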
Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
(Copyright © 2023 Elsevier Ltd. All rights reserved.)
Database: MEDLINE