Hybrid Hopfield Neural Network

Authors: Cursino, Carla; Dias, Luiz Alberto Vieira
Source: SN Computer Science; February 2024, Vol. 5, Issue 2
Abstract: Hopfield and Tank showed that a neural network can find solutions to complex optimization problems, although it can become trapped in a local minimum of the objective function and return a suboptimal solution. When the problem has constraints, they can be added to the objective function as penalty terms using Lagrange multipliers. In this paper, we introduce an approach, inspired by the work of Andrew, Chu, and Gee, that implements a neural network whose solutions satisfy the linear equality constraints: the Moore–Penrose pseudoinverse is used to construct a projection matrix that sends any configuration to the subspace of configuration space satisfying all the constraints. The objective function of the problem is modified to include Lagrange multiplier terms for the constraint equations. Furthermore, we have found that this condition makes the network converge to a set of stable states even if some diagonal elements of the weight matrix are negative. If after several steps the network does not converge to a stable state, we solve the problem using simulated annealing, which significantly outperforms hill climbing, a feed-forward neural network, and a convolutional neural network. We use this technique to solve the NP-hard Light Up puzzle. Hopfield neural networks are widely used for pattern recognition and optimization tasks. However, the standard Hopfield network model requires non-negative diagonal weights (self-connections) to guarantee convergence, which can limit its performance in certain situations. By allowing negative weights, the network can potentially learn more complex and nuanced patterns and exhibit improved convergence properties. Thus, the motivation for the article "Hybrid Hopfield Neural Network" is to explore the benefits of incorporating negative weights into Hopfield networks and to investigate their impact on the performance of the network.
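The constraint-projection step described in the abstract can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: for linear equality constraints A x = b, the Moore–Penrose pseudoinverse A⁺ lets us map any configuration x to a feasible one via x − A⁺(A x − b). All variable names and the example constraint system are illustrative assumptions.

```python
import numpy as np

def project_to_constraints(x, A, b):
    """Project an arbitrary configuration x onto the affine
    subspace {x : A x = b} using the Moore-Penrose pseudoinverse."""
    A_pinv = np.linalg.pinv(A)           # Moore-Penrose pseudoinverse A+
    return x - A_pinv @ (A @ x - b)      # feasible point closest to x

# Illustrative example: two equality constraints on three variables.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])

x0 = np.array([0.7, 0.2, 0.4])           # arbitrary starting configuration
x_proj = project_to_constraints(x0, A, b)

# After projection the constraints hold (up to floating-point rounding).
print(np.allclose(A @ x_proj, b))        # True
```

In a Hopfield-style update loop, applying this projection after each state update keeps the network trajectory inside the constraint subspace, which is what removes the need for the constraints to be enforced purely through penalty terms.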
Database: Supplemental Index