Relative loss bounds for single neurons
Author: Manfred K. Warmuth, David P. Helmbold, Jyrki Kivinen
Year of publication: 1999
Subject: Mathematical optimization; Matching (graph theory); Artificial neural network; Computer Networks and Communications; MathematicsofComputing_NUMERICALANALYSIS; General Medicine; Function (mathematics); Backpropagation; Computer Science Applications; Maxima and minima; Stochastic gradient descent; Artificial Intelligence; Applied mathematics; Differentiable function; Gradient descent; Software; Mathematics
Source: IEEE Transactions on Neural Networks 10:1291-1304
ISSN: 1045-9227
DOI: 10.1109/72.809075
Description: We analyze and compare the well-known gradient descent algorithm and the more recent exponentiated gradient algorithm for training a single neuron with an arbitrary transfer function. Both algorithms are easily generalized to larger neural networks, and the generalization of gradient descent is the standard backpropagation algorithm. In this paper we prove worst-case loss bounds for both algorithms in the single-neuron case. Since local minima make it difficult to prove worst-case bounds for gradient-based algorithms, we must use a loss function that prevents the formation of spurious local minima. We define such a matching loss function for any strictly increasing differentiable transfer function and prove worst-case loss bounds for any such transfer function and its corresponding matching loss. For example, the matching loss for the identity function is the square loss, and the matching loss for the logistic transfer function is the entropic loss. The different forms of the two algorithms' bounds indicate that exponentiated gradient outperforms gradient descent when the inputs contain a large number of irrelevant components. Simulations on synthetic data confirm these analytical results.
Database: OpenAIRE
External link:
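The description above characterizes the matching loss for a strictly increasing differentiable transfer function and contrasts the additive gradient-descent update with the multiplicative exponentiated-gradient update. The Python sketch below is purely illustrative and is not the authors' code: the function names, the learning rate `eta`, the simplex-style normalisation in `eg_update`, and the toy data are assumptions made for this example. The matching-loss integral follows the definition implied by the description (for the identity transfer function it reduces to half the square loss; for the logistic it gives the entropic loss).

```python
# Illustrative sketch only (not the authors' implementation): matching loss for
# a transfer function phi, plus the gradient-descent (GD) and one simple
# exponentiated-gradient (EG) update for a single neuron. The learning rate
# `eta` and the toy data are arbitrary choices for the demo.

import numpy as np
from scipy.integrate import quad


def matching_loss(phi, phi_inv, y, y_hat):
    """Matching loss M_phi(y, y_hat) = integral of (phi(z) - y) dz
    taken from phi_inv(y) to phi_inv(y_hat).
    For phi(z) = z this equals (y_hat - y)**2 / 2 (half the square loss);
    for the logistic phi it equals the entropic loss."""
    value, _ = quad(lambda z: phi(z) - y, phi_inv(y), phi_inv(y_hat))
    return value


def gd_update(w, x, y, phi, eta=0.1):
    """Gradient descent on the matching loss: its gradient w.r.t. w
    is (phi(w . x) - y) * x."""
    y_hat = phi(w @ x)
    return w - eta * (y_hat - y) * x


def eg_update(w, x, y, phi, eta=0.1):
    """Exponentiated gradient: multiplicative update with the same gradient,
    followed by renormalisation (weights kept as a probability vector;
    one simple variant of EG, assumed here for illustration)."""
    y_hat = phi(w @ x)
    w_new = w * np.exp(-eta * (y_hat - y) * x)
    return w_new / w_new.sum()


if __name__ == "__main__":
    logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
    logit = lambda p: np.log(p / (1.0 - p))

    # Matching loss of the identity: half the square loss, here 0.125.
    print(matching_loss(lambda z: z, lambda y: y, 0.3, 0.8))
    # Matching loss of the logistic: the entropic (relative-entropy) loss.
    print(matching_loss(logistic, logit, 0.3, 0.8))

    # One GD step and one EG step on a toy example.
    rng = np.random.default_rng(0)
    x, y = rng.normal(size=5), 0.7
    print(gd_update(np.zeros(5), x, y, logistic))
    print(eg_update(np.full(5, 0.2), x, y, logistic))
```

The structural difference the sketch shows, GD adding a scaled gradient to the weights while EG multiplies the weights by an exponentiated gradient, is what lies behind the different forms of the two worst-case bounds and the description's conclusion that exponentiated gradient is preferable when many input components are irrelevant.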