Convergent Stochastic Almost Natural Gradient Descent
Authors: Sánchez-López, Borja; Cerquides, Jesús
Publication year: 2019
Source: Digital.CSIC, Repositorio Institucional del CSIC
Abstract: Stochastic Gradient Descent (SGD) is the workhorse beneath the deep learning revolution. However, SGD is known to slow down due to the plateau phenomenon. Stochastic Natural Gradient Descent (SNGD) was proposed by Amari to resolve that problem by taking advantage of the geometry of the parameter space. Nevertheless, the convergence of SNGD is not guaranteed. The aim of this article is to modify SNGD to obtain a convergent variant, which we name Convergent SNGD (CSNGD), and to test it on a specific toy optimization problem. In particular, we concentrate on the problem of learning a discrete probability distribution. Based on the variable metric convergence results presented by Sunehag et al. [13], we prove the convergence of CSNGD. Furthermore, we provide experimental results showing that it significantly improves over SGD. We claim that the approach developed in this paper could be extended to more complex optimization problems, making it a promising research line.
Database: OpenAIRE
External link:
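As a rough illustration of the natural-gradient idea referred to in the abstract, the sketch below fits a categorical distribution with a plain stochastic-gradient step and with a Fisher-preconditioned (natural-gradient) step. This is a minimal sketch under assumed details (softmax parameterization, damping constant, learning rates, batch size); it is not the paper's CSNGD algorithm, whose convergent modification of SNGD is described in the article itself.

```python
# Illustrative sketch only: vanilla SGD vs. a Fisher-preconditioned
# ("natural gradient") step for learning a discrete (categorical)
# distribution. All names and constants here are assumptions made
# for the example, not the paper's CSNGD method.
import numpy as np

rng = np.random.default_rng(0)

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def sample_grad(theta, target, batch=32):
    """Stochastic gradient of the cross-entropy between softmax(theta)
    and a minibatch of samples drawn from the target distribution."""
    p = softmax(theta)
    x = rng.choice(len(target), size=batch, p=target)
    counts = np.bincount(x, minlength=len(target)) / batch
    return p - counts  # d/d(theta) of cross-entropy under softmax

def sgd_step(theta, grad, lr=0.5):
    # Plain stochastic gradient descent update.
    return theta - lr * grad

def natural_step(theta, grad, lr=0.5, damping=1e-3):
    """Precondition the gradient with the (damped) Fisher information
    matrix of the categorical model, F = diag(p) - p p^T."""
    p = softmax(theta)
    F = np.diag(p) - np.outer(p, p) + damping * np.eye(len(p))
    return theta - lr * np.linalg.solve(F, grad)

target = np.array([0.7, 0.2, 0.05, 0.05])
theta_sgd = np.zeros(4)
theta_nat = np.zeros(4)
for _ in range(200):
    theta_sgd = sgd_step(theta_sgd, sample_grad(theta_sgd, target))
    theta_nat = natural_step(theta_nat, sample_grad(theta_nat, target))

print("SGD estimate:    ", np.round(softmax(theta_sgd), 3))
print("Natural estimate:", np.round(softmax(theta_nat), 3))
print("Target:          ", target)
```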