Faster Convergence in Deep-Predictive-Coding Networks to Learn Deeper Representations.

Author: Sledge IJ, Principe JC
Language: English
Source: IEEE transactions on neural networks and learning systems [IEEE Trans Neural Netw Learn Syst] 2023 Aug; Vol. 34 (8), pp. 5156-5170. Date of Electronic Publication: 2023 Aug 04.
DOI: 10.1109/TNNLS.2021.3115698
Abstract: Deep-predictive-coding networks (DPCNs) are hierarchical, generative models. They rely on feed-forward and feedback connections to modulate latent feature representations of stimuli in a dynamic and context-sensitive manner. A crucial element of DPCNs is a forward-backward inference procedure to uncover sparse, invariant features. However, this inference is a major computational bottleneck that severely limits network depth due to learning stagnation. Here, we prove why this bottleneck occurs. We then propose a new forward-inference strategy based on accelerated proximal gradients, which has faster theoretical convergence guarantees than the strategy previously used for DPCNs and overcomes learning stagnation. We also demonstrate that it permits constructing deep and wide predictive-coding networks. Such convolutional networks implement receptive fields that capture well the entire classes of objects on which the networks are trained. This improves the feature representations compared with our lab's previous nonconvolutional and convolutional DPCNs, and it yields unsupervised object recognition that surpasses convolutional autoencoders and is on par with convolutional networks trained in a supervised manner.
Database: MEDLINE
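The abstract's key idea is replacing the DPCN forward-inference step with an accelerated proximal-gradient method. The record does not give the authors' actual inference code, but the general technique can be illustrated with FISTA applied to a generic sparse-coding objective, min_x 0.5*||Ax - b||^2 + lam*||x||_1. Everything below (the dictionary `A`, signal `b`, regularizer `lam`, step size, and iteration count) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    # Accelerated proximal gradient (FISTA) for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    # This is a generic sketch of the technique named in the abstract,
    # not the DPCN authors' inference procedure.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                             # extrapolated (momentum) point
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov-style extrapolation
        x, t = x_new, t_new
    return x
```

The momentum extrapolation is what distinguishes this from plain proximal gradient descent: it improves the worst-case convergence rate from O(1/k) to O(1/k^2), which is the kind of theoretical speedup the abstract attributes to the proposed inference strategy.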