Variational Characterizations of Local Entropy and Heat Regularization in Deep Learning
Authors: | Nicolas García Trillos, Zachary Kaplan, Daniel Sanz-Alonso |
Language: | English |
Publication year: | 2019 |
Source: | Entropy, Vol 21, Iss 5, p 511 (2019) |
Document type: | article |
ISSN: | 1099-4300 |
DOI: | 10.3390/e21050511 |
Description: | The aim of this paper is to provide new theoretical and computational understanding of two loss regularizations employed in deep learning, known as local entropy and heat regularization. For both regularized losses, we introduce variational characterizations that naturally suggest a two-step scheme for their optimization, based on the iterative shift of a probability density and the calculation of a best Gaussian approximation in Kullback–Leibler divergence. Disregarding the approximation error in these two steps, the variational characterizations allow us to show a simple monotonicity result for the training error along the optimization iterates. The two-step optimization schemes for the local entropy and heat-regularized losses differ only in which argument of the Kullback–Leibler divergence is used to find the best Gaussian approximation: local entropy corresponds to minimizing over the second argument, in which case the solution is given by moment matching, whereas heat regularization minimizes over the first. The moment-matching solution allows the traditional backpropagation calculation of gradients to be replaced by sampling algorithms, opening an avenue for gradient-free, parallelizable training of neural networks. However, our presentation also acknowledges the potential increase in the computational cost of naively optimizing the regularized costs, thus giving a less optimistic view of the gains facilitated by loss regularization than existing works. |
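To make the two-step scheme in the description concrete, here is a minimal sketch of one gradient-free local-entropy update. This is not code from the paper: the loss `f`, the smoothing parameter `tau`, the sample count, and the use of self-normalized importance sampling as the sampling algorithm are all illustrative assumptions.

```python
# Sketch (illustrative, not the authors' implementation) of the two-step
# local-entropy update: (1) shift the density by proposing Gaussian
# perturbations of the current iterate, (2) compute the best Gaussian
# approximation by moment matching, estimated here via self-normalized
# importance sampling, so no gradients of f are needed.
import numpy as np

def local_entropy_step(theta, f, tau, n_samples=1000, rng=None):
    """One gradient-free update: sample around theta, reweight by exp(-f),
    and move theta to the weighted sample mean (moment matching)."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: shift the density, i.e. propose from the Gaussian N(theta, tau*I).
    samples = theta + np.sqrt(tau) * rng.standard_normal((n_samples, theta.size))
    # Step 2: the target density is proportional to exp(-f(w)) * N(w; theta, tau*I),
    # so self-normalized importance weights are proportional to exp(-f(sample)).
    log_w = -np.array([f(s) for s in samples])
    w = np.exp(log_w - log_w.max())  # subtract max for numerical stability
    w /= w.sum()
    return w @ samples               # weighted mean is the new iterate

# Toy usage on a two-dimensional double-well loss (assumed for illustration).
f = lambda th: (th @ th - 1.0) ** 2
theta = np.array([2.0, -2.0])
for _ in range(50):
    theta = local_entropy_step(theta, f, tau=0.1)
print(theta)  # approaches the circle of minimizers ||theta|| = 1
```

The weighted mean implements the moment-matching step because minimizing the Kullback–Leibler divergence over its second (Gaussian) argument matches the mean of the tilted density; smaller `tau` concentrates the proposal and reduces weight degeneracy at the cost of smaller steps.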
Database: | Directory of Open Access Journals |