Showing 1 - 10 of 78 for search: '"Salzo, Saverio"'
In this paper, we provide novel optimal (or near optimal) convergence rates in expectation for the last iterate of a clipped version of the stochastic subgradient method. We consider nonsmooth convex problems, over possibly unbounded domains, under h…
External link:
http://arxiv.org/abs/2410.00573
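As a rough illustration of the clipping rule this line of work builds on (the function names and the toy objective below are ours, not the paper's), a clipped subgradient step rescales the stochastic subgradient so its norm never exceeds a threshold before taking an ordinary step:

```python
import numpy as np

def clipped_subgradient_step(x, g, step_size, clip_level):
    """One clipped (stochastic) subgradient step: rescale g so that
    ||g|| <= clip_level, then take a plain subgradient step."""
    norm = np.linalg.norm(g)
    if norm > clip_level:
        g = g * (clip_level / norm)   # shrink g onto the clipping ball
    return x - step_size * g

# Toy deterministic run: minimize f(x) = x^2 (gradient 2x) from x = 5
# with step sizes 1/t; clipping stays active until |2x| drops below 1.
x = np.array([5.0])
for t in range(1, 201):
    x = clipped_subgradient_step(x, 2.0 * x, 1.0 / t, clip_level=1.0)
```

This is only a sketch of the mechanism; the paper's contribution concerns last-iterate rates for the stochastic, nonsmooth version of this scheme, not the deterministic toy above.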
We study the problem of efficiently computing the derivative of the fixed-point of a parametric nondifferentiable contraction map. This problem has wide applications in machine learning, including hyperparameter optimization, meta-learning and data p…
External link:
http://arxiv.org/abs/2403.11687
In recent years, bilevel approaches have become very popular to efficiently estimate high-dimensional hyperparameters of machine learning models. However, to date, binary parameters are handled by continuous relaxation and rounding strategies, which…
External link:
http://arxiv.org/abs/2308.10711
In the context of finite sums minimization, variance reduction techniques are widely used to improve the performance of state-of-the-art stochastic gradient methods. Their practical impact is clear, as well as their theoretical properties. Stochastic…
External link:
http://arxiv.org/abs/2308.09310
In this work we study high probability bounds for stochastic subgradient methods under heavy tailed noise. In this setting the noise is only assumed to have finite variance as opposed to a sub-Gaussian distribution for which it is known that standard…
External link:
http://arxiv.org/abs/2208.08567
Published in:
Journal of Machine Learning Research, volume 24, number 167, pages 1-37, year 2023
We analyse a general class of bilevel problems, in which the upper-level problem consists in the minimization of a smooth objective function and the lower-level problem is to find the fixed point of a smooth contraction map. This type of problems inc…
External link:
http://arxiv.org/abs/2202.03397
In this paper, we study the convergence properties of a randomized block-coordinate descent algorithm for the minimization of a composite convex objective function, where the block-coordinates are updated asynchronously and randomly according to an a…
External link:
http://arxiv.org/abs/2201.05498
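For a concrete picture of randomized coordinate updates on a composite objective (the toy problem and all names below are ours; the paper treats a far more general asynchronous setting), consider a separable composite objective where each coordinate step is an exact proximal (soft-thresholding) update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy separable composite objective: 0.5 * ||x - c||^2 + lam * ||x||_1.
# Separability makes each randomly chosen coordinate update an exact
# proximal (soft-threshold) step, so the scheme is easy to follow.
c = np.array([1.0, -2.0, 3.0, 0.2])
lam = 0.5

def soft_threshold(z, tau):
    """Prox of tau * |.| : shrink z toward zero by tau."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

x = np.zeros_like(c)
for _ in range(400):
    i = rng.integers(c.size)          # coordinate drawn uniformly at random
    x[i] = soft_threshold(c[i], lam)  # exact coordinate-wise prox step
```

In the non-separable case the coordinate update involves a partial gradient plus a prox step, and the paper's analysis covers arbitrary sampling probabilities and asynchronous delays.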
In this work we propose a batch version of the Greenkhorn algorithm for multimarginal regularized optimal transport problems. Our framework is general enough to cover, as particular cases, some existing algorithms like Sinkhorn and Greenkhorn algorit…
External link:
http://arxiv.org/abs/2112.00838
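For reference, the classical two-marginal Sinkhorn scheme that this framework generalizes alternates marginal-matching rescalings of a Gibbs kernel (a minimal sketch under our own toy data; the paper's batch Greenkhorn updates only subsets of coordinates per iteration):

```python
import numpy as np

def sinkhorn(C, mu, nu, eps, n_iter=300):
    """Two-marginal Sinkhorn iterations for entropically regularized OT."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)                  # rescale rows to match mu
        v = nu / (K.T @ u)                # rescale columns to match nu
    return u[:, None] * K * v[None, :]    # transport plan

# Uniform marginals on a small 1-D grid with squared-distance cost.
n = 4
mu = np.full(n, 1.0 / n)
nu = np.full(n, 1.0 / n)
grid = np.arange(n) / (n - 1)
C = (grid[:, None] - grid[None, :]) ** 2
P = sinkhorn(C, mu, nu, eps=0.5)
```

Greenkhorn replaces the full row/column sweeps with greedy single-coordinate updates; the batch version of the paper interpolates between the two.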
Author:
Kostic, Vladimir, Salzo, Saverio
In this work we study the method of Bregman projections for deterministic and stochastic convex feasibility problems with three types of control sequences for the selection of sets during the algorithmic procedure: greedy, random, and adaptive random…
External link:
http://arxiv.org/abs/2101.01704
Published in:
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021), PMLR 130:3826-3834
Bilevel optimization problems are receiving increasing attention in machine learning as they provide a natural framework for hyperparameter optimization and meta-learning. A key step to tackle these problems is the efficient computation of the gradie…
External link:
http://arxiv.org/abs/2011.07122
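The hypergradient computation these bilevel papers revolve around can be illustrated with implicit differentiation at a fixed point: if w*(λ) solves w = Φ(w, λ) for a contraction Φ, then dw*/dλ = (I − ∂_w Φ)⁻¹ ∂_λ Φ. A minimal sketch with a hypothetical linear contraction of our own choosing (so the formula has a closed form to check):

```python
import numpy as np

# Hypothetical contraction Phi(w, lam) = A @ w + lam * b, with the
# spectral radius of A below 1 so Picard iteration converges.
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
b = np.array([1.0, 2.0])

def fixed_point(lam, n_iter=200):
    """Approximate w*(lam) by iterating the contraction."""
    w = np.zeros(2)
    for _ in range(n_iter):
        w = A @ w + lam * b
    return w

def hypergradient(lam):
    """Implicit-function-theorem derivative of the fixed point:
    d w*/d lam = (I - d_w Phi)^{-1} d_lam Phi = (I - A)^{-1} b."""
    return np.linalg.solve(np.eye(2) - A, b)
```

A finite-difference check of `fixed_point` against `hypergradient` confirms the formula on this toy; the papers above study approximation schemes (iterative differentiation vs. approximate implicit differentiation) when neither the fixed point nor the linear solve is exact.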