On Expressivity and Trainability of Quadratic Networks.

Authors: Fan FL, Li M, Wang F, Lai R, Wang G
Language: English
Source: IEEE Transactions on Neural Networks and Learning Systems [IEEE Trans Neural Netw Learn Syst] 2023 Nov 23; Vol. PP. Date of Electronic Publication: 2023 Nov 23.
DOI: 10.1109/TNNLS.2023.3331380
Abstract: Inspired by the diversity of biological neurons, quadratic artificial neurons can play an important role in deep learning models. The type of quadratic neuron of interest here replaces the inner-product operation of the conventional neuron with a quadratic function. Despite the promising results achieved so far by networks of quadratic neurons, several important issues remain unaddressed. Theoretically, the superior expressivity of a quadratic network over either a conventional network or a conventional network with quadratic activation has not been fully elucidated, which leaves the use of quadratic networks insufficiently grounded. In practice, although a quadratic network can be trained via generic backpropagation, it is subject to a higher risk of collapse than its conventional counterpart. To address these issues, we first apply spline theory and a measure from algebraic geometry to establish two theorems demonstrating the better model expressivity of a quadratic network than its conventional counterpart, with or without quadratic activation. Then, we propose an effective training strategy, referred to as referenced linear initialization (ReLinear), to stabilize the training process of a quadratic network, thereby unleashing its full potential in the associated machine learning tasks. Comprehensive experiments on popular datasets are performed to support our findings and confirm the performance of quadratic deep learning. Our code is shared at https://github.com/FengleiFan/ReLinear.
Database: MEDLINE
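The abstract describes a quadratic neuron as replacing the inner product with a quadratic function, and ReLinear as an initialization that stabilizes training. A minimal sketch of these two ideas, assuming the commonly used quadratic-neuron form (w_r·x + b_r)(w_g·x + b_g) + w_b·(x∘x) + c; the exact parameterization and ReLinear schedule are given in the paper and its repository, so the zero-initialization shown here is only illustrative:

```python
import numpy as np

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    # Pre-activation of one quadratic neuron (illustrative form):
    # (w_r . x + b_r) * (w_g . x + b_g) + w_b . (x * x) + c
    return (wr @ x + br) * (wg @ x + bg) + wb @ (x * x) + c

rng = np.random.default_rng(0)
x = rng.normal(size=4)

# Linear-term parameters, initialized as in a conventional neuron.
wr = rng.normal(size=4)
br = 0.1

# ReLinear-style initialization (sketch): zero out the quadratic
# parameters so the network starts as its conventional counterpart,
# then let training gradually grow the quadratic terms.
wg = np.zeros(4); bg = 1.0
wb = np.zeros(4); c = 0.0

out = quadratic_neuron(x, wr, br, wg, bg, wb, c)
linear = wr @ x + br  # conventional neuron for comparison
```

With this initialization the quadratic neuron is exactly the conventional linear neuron at the start of training, which is the intuition behind referencing a linear model to avoid collapse.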