Showing 1 - 6 of 6 for the search: '"Terjék, Dávid"'
Author:
Terjék, Dávid, González-Sánchez, Diego
A candidate explanation of the good empirical performance of deep neural networks is the implicit regularization effect of first order optimization methods. Inspired by this, we prove a convergence theorem for nonconvex composite optimization, and ap…
External link:
http://arxiv.org/abs/2205.13507
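The abstract above concerns nonconvex composite optimization, i.e. minimizing a smooth term plus a nonsmooth regularizer. A minimal sketch of the classical first-order method for this setting, proximal gradient descent, here instantiated for an ℓ1-regularized least-squares problem (the data `A`, `y` and the parameter `lam` are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1 (componentwise shrinkage)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_grad(A, y, lam=0.1, n_iter=500):
    """Proximal gradient descent for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the nonsmooth part
    return x

# illustrative problem: with A = identity, the minimizer is soft-thresholded y
x_hat = prox_grad(np.eye(3), np.array([1.0, 0.05, -2.0]), lam=0.1)
```

Each iteration alternates a gradient step on the smooth term with the proximal map of the nonsmooth term, the basic pattern behind first-order composite methods.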
Author:
Terjék, Dávid
We propose a family of extensions of the Kantorovich-Rubinstein norm from the space of zero-charge countably additive measures on a compact metric space to the space of all countably additive measures, and a family of extensions of the Lipschitz norm…
External link:
http://arxiv.org/abs/2107.02725
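For reference, the classical Kantorovich-Rubinstein norm on zero-charge measures ($\mu(X) = 0$) over a compact metric space $(X, d)$ is the dual formulation of the Wasserstein-1 distance; the extensions in the abstract generalize beyond the zero-charge case, where the definition below would be infinite:

```latex
\lVert \mu \rVert_{\mathrm{KR}}
  \;=\; \sup \left\{ \int_X f \, \mathrm{d}\mu
  \;:\; f \colon X \to \mathbb{R},\
  \lvert f(x) - f(y) \rvert \le d(x, y) \ \forall x, y \in X \right\}
```

The supremum is finite on zero-charge measures because shifting $f$ by a constant leaves the integral unchanged.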
Author:
Terjék, Dávid, González-Sánchez, Diego
Published in:
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:5135-5165, 2022
Entropic regularization provides a generalization of the original optimal transport problem. It introduces a penalty term defined by the Kullback-Leibler divergence, making the problem more tractable via the celebrated Sinkhorn algorithm. Replacing t…
External link:
http://arxiv.org/abs/2105.14337
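The entry above refers to entropy-regularized optimal transport solved by the Sinkhorn algorithm. A minimal sketch of the standard algorithm on discrete histograms (the histograms `a`, `b`, cost matrix `C`, and regularization `eps` are toy choices, not from the paper):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=500):
    """Sinkhorn iterations for entropy-regularized OT between histograms a and b."""
    K = np.exp(-C / eps)                 # Gibbs kernel built from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # rescale to match the column marginal b
        u = a / (K @ v)                  # rescale to match the row marginal a
    return u[:, None] * K * v[None, :]   # transport plan with the prescribed marginals

# toy example: uniform histograms on 3 points with |i - j| ground cost
a = np.ones(3) / 3
b = np.ones(3) / 3
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
P = sinkhorn(a, b, C)
```

The algorithm is just alternating diagonal rescaling of the kernel `K` so that the plan's row and column sums match the two marginals.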
Author:
Terjék, Dávid
Published in:
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10214-10224, 2021
Variational representations of $f$-divergences are central to many machine learning algorithms, with Lipschitz constrained variants recently gaining attention. Inspired by this, we define the Moreau-Yosida approximation of $f$-divergences with respec…
External link:
http://arxiv.org/abs/2102.13416
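For context, the classical Moreau-Yosida approximation (Moreau envelope) of a function $f$ with parameter $\lambda > 0$ is its infimal convolution with a scaled squared norm; the paper's construction adapts this idea to $f$-divergences, but the standard Euclidean definition reads:

```latex
f_\lambda(x) \;=\; \inf_{y} \left( f(y) + \frac{1}{2\lambda} \, \lVert x - y \rVert^2 \right)
```

For convex lower semicontinuous $f$, the envelope $f_\lambda$ is $\tfrac{1}{\lambda}$-smooth and converges pointwise to $f$ as $\lambda \to 0$, which is what makes it a useful regularization device.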
Author:
Terjék, Dávid
In this note, following \cite{Chitescuetal2014}, we show that the Monge-Kantorovich norm on the vector space of countably additive measures on a compact metric space has a primal representation analogous to the Hanin norm, meaning that similarly to t…
External link:
http://arxiv.org/abs/2102.12280
Author:
Terjék, Dávid
Generative adversarial networks (GANs) are one of the most popular approaches when it comes to training generative models, among which variants of Wasserstein GANs are considered superior to the standard GAN formulation in terms of learning stability…
External link:
http://arxiv.org/abs/1907.05681
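A minimal 1-D sketch of the Wasserstein GAN critic objective with weight clipping, the original heuristic for enforcing the Lipschitz constraint on the critic; the linear critic, toy Gaussian data, and learning rate are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def critic_loss(real, fake, w):
    # the critic maximizes E[f(real)] - E[f(fake)] for f(x) = w.x,
    # so as a loss we minimize the negative of that quantity
    return (fake @ w).mean() - (real @ w).mean()

# toy 1-D data: "real" samples near +2, "fake" generator samples near -2
real = rng.normal(2.0, 0.5, size=(256, 1))
fake = rng.normal(-2.0, 0.5, size=(256, 1))

w, clip, lr = np.zeros(1), 0.01, 0.05
for _ in range(100):
    grad = fake.mean(axis=0) - real.mean(axis=0)   # gradient of critic_loss w.r.t. w
    w = np.clip(w - lr * grad, -clip, clip)        # clip weights to bound the Lipschitz constant
```

Clipping caps the critic's slope, so its value gap between real and fake samples estimates (a lower bound on) the Wasserstein-1 distance rather than growing without bound.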