Showing 1 - 10 of 8,680
for search: '"A. Grimmer"'
Authors:
Wu, Yue; Grimmer, Benjamin
This work considers the nonconvex, nonsmooth problem of minimizing a composite objective of the form $f(g(x))+h(x)$, where the inner mapping $g$ is a smooth finite summation or expectation amenable to variance reduction. In such settings, prox-linear …
External link:
http://arxiv.org/abs/2412.15008
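For context, the prox-linear update this snippet alludes to is usually written (in its standard textbook form, not taken from the paper itself) as

\[
x_{k+1} = \operatorname*{argmin}_{x} \; f\big(g(x_k) + \nabla g(x_k)(x - x_k)\big) + h(x) + \frac{1}{2t}\|x - x_k\|^2,
\]

where $g$ is linearized at the current iterate while the outer function $f$ and the regularizer $h$ are kept exact, and $t > 0$ is a stepsize. When $f$ and $h$ are convex, each subproblem is convex even though the overall composite objective is not.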
The study of unconstrained convex optimization has historically been concerned with worst-case a priori convergence rates. The development of the Optimized Gradient Method (OGM), due to Drori and Teboulle and to Kim and Fessler, marked a major milestone in …
External link:
http://arxiv.org/abs/2412.06731
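For reference, a minimal sketch of the OGM iteration in the form given by Kim and Fessler follows; the helper names and the quadratic test problem are illustrative choices, not anything specified by this paper:

    import math

    def ogm(grad, x0, L, N):
        """Optimized Gradient Method of Kim and Fessler for an L-smooth
        convex f, run for N gradient evaluations."""
        theta = 1.0
        x, y = x0, x0
        for i in range(N):
            y_next = x - grad(x) / L  # plain gradient step
            # theta recursion; the final iteration uses a different rule
            if i < N - 1:
                theta_next = (1 + math.sqrt(1 + 4 * theta ** 2)) / 2
            else:
                theta_next = (1 + math.sqrt(1 + 8 * theta ** 2)) / 2
            # momentum built from the last two gradient-step points
            x = (y_next
                 + ((theta - 1) / theta_next) * (y_next - y)
                 + (theta / theta_next) * (y_next - x))
            y, theta = y_next, theta_next
        return y

    # Illustrative use: minimize f(x) = x**2, whose gradient is 2x and L = 2.
    x_star = ogm(lambda x: 2 * x, x0=1.0, L=2.0, N=25)

The coefficient recursion is reproduced from memory of the Kim-Fessler paper and should be checked against the source before serious use.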
Recent works by Altschuler and Parrilo and the authors have shown that it is possible to accelerate the convergence of gradient descent on smooth convex functions, even without momentum, just by picking special stepsizes. In this paper, we provide a …
External link:
http://arxiv.org/abs/2410.16249
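As a concrete example of such a stepsize pattern, here is a sketch of the silver stepsize construction of Altschuler and Parrilo; the recursion below is reproduced from memory and should be verified against their paper before use:

    import math

    RHO = 1 + math.sqrt(2)  # the silver ratio

    def silver_schedule(k):
        """Stepsizes (in units of 1/L) for n = 2**k - 1 steps of gradient
        descent: glue two copies of the previous schedule around a single,
        progressively longer middle step."""
        schedule = [math.sqrt(2)]
        for j in range(1, k):
            schedule = schedule + [1 + RHO ** (j - 1)] + schedule
        return schedule

    # silver_schedule(3) gives a 7-step schedule mixing short and long steps.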
Authors:
Luner, Alan; Grimmer, Benjamin
We extend recent computer-assisted design and analysis techniques for first-order optimization over structured functions, known as performance estimation, to apply to structured sets. We prove "interpolation theorems" for smooth and strongly convex sets …
External link:
http://arxiv.org/abs/2410.14811
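For background, the flavor of such an interpolation theorem is easiest to see in the classical function case due to Taylor, Hendrickx, and Glineur (stated here only as context; the paper's contribution is the analogous theory for sets): a finite collection of triples $(x_i, g_i, f_i)$ can be interpolated by an $L$-smooth convex function if and only if

\[
f_i \ge f_j + \langle g_j, x_i - x_j \rangle + \frac{1}{2L}\|g_i - g_j\|^2 \quad \text{for all } i, j.
\]

Results of this kind are what allow worst-case performance questions to be posed as finite-dimensional semidefinite programs.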
We introduce AutoPersuade, a three-part framework for constructing persuasive messages. First, we curate a large dataset of arguments with human evaluations. Next, we develop a novel topic model to identify argument features that influence persuasiveness …
External link:
http://arxiv.org/abs/2410.08917
Authors:
Grimmer, Marcel; Busch, Christoph
Face morphing attacks pose a severe security threat to face recognition systems, enabling the morphed face image to be verified against multiple identities. To detect such manipulated images, the development of new face morphing methods becomes essential …
External link:
http://arxiv.org/abs/2410.07988
Drori and Teboulle [4] conjectured that the minimax optimal constant stepsize for N steps of gradient descent is given by the stepsize that balances performance on Huber and quadratic objective functions. This was numerically supported by semidefinite programming …
External link:
http://arxiv.org/abs/2407.11739
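As a hedged illustration of what "balancing" means here, the sketch below bisects for the stepsize h (in units of 1/L) at which the two candidate worst cases coincide; the exact constants in the two rate expressions are reproduced from memory of the Drori-Teboulle conjecture, so treat them as assumptions:

    def balanced_stepsize(N, tol=1e-12):
        """Find h in (1, 2) where the quadratic worst case (h - 1)**(2N)
        equals the Huber worst case 1/(2*N*h + 1), both in units of
        L * D**2 / 2 for initial distance D. On (1, 2) the first expression
        increases from 0 and the second decreases, so bisection applies."""
        lo, hi = 1.0, 2.0
        while hi - lo > tol:
            h = (lo + hi) / 2
            if (h - 1) ** (2 * N) < 1 / (2 * N * h + 1):
                lo = h  # quadratic side still smaller: longer steps help
            else:
                hi = h
        return (lo + hi) / 2

    # For example, balanced_stepsize(10) is roughly 1.8, approaching 2 as N grows.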
This work considers gradient descent for L-smooth convex optimization with stepsizes larger than the classic regime where descent can be ensured. The stepsize schedules considered are similar to but differ slightly from the recent silver stepsizes of Altschuler and Parrilo …
External link:
http://arxiv.org/abs/2403.14045
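A minimal driver for gradient descent under an arbitrary long-step schedule might look as follows; the quadratic test function and all names are illustrative only:

    import math

    def gradient_descent(grad, x0, L, schedule):
        """Apply x <- x - (h / L) * grad(x) for each h in the schedule.
        Stepsizes h are in units of 1/L, so h > 2 is outside the classic
        regime in which every iteration is guaranteed to descend."""
        x = x0
        for h in schedule:
            x = x - (h / L) * grad(x)
        return x

    # Illustrative use on f(x) = x**2 (so L = 2) with a short mixed schedule.
    x_final = gradient_descent(lambda x: 2 * x, x0=1.0, L=2.0,
                               schedule=[math.sqrt(2), 2.0, math.sqrt(2)])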
Authors:
Samakhoana, Thabo; Grimmer, Benjamin
Recent works have developed new projection-free first-order methods based on utilizing linesearches and normal vector computations to maintain feasibility. These oracles can be cheaper than orthogonal projection or linear optimization subroutines but …
External link:
http://arxiv.org/abs/2403.13688
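To make the oracle model concrete, here is a hedged sketch of one such subroutine: a bisection linesearch that, given a strictly feasible anchor and an infeasible point, locates the boundary point between them. The function names and the membership-oracle interface are hypothetical, chosen only for illustration:

    def boundary_linesearch(is_feasible, anchor, target, tol=1e-10):
        """Bisect along the segment [anchor, target] to locate the boundary
        of the feasible set. Requires is_feasible(anchor) == True and
        is_feasible(target) == False. Returns the last feasible point found.
        Each call costs only membership tests, which can be far cheaper than
        an orthogonal projection or a linear optimization subroutine."""
        lo, hi = 0.0, 1.0  # fraction of the way from anchor toward target
        point = lambda t: tuple(a + t * (b - a) for a, b in zip(anchor, target))
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if is_feasible(point(mid)):
                lo = mid
            else:
                hi = mid
        return point(lo)

    # Illustrative use with the unit ball in the plane:
    unit_ball = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
    b = boundary_linesearch(unit_ball, anchor=(0.0, 0.0), target=(3.0, 4.0))
    # b is approximately (0.6, 0.8), the boundary point along the segment.

In the setting these methods study, such a boundary point would typically be returned together with a normal vector to the set there; for a set given by a smooth constraint, that could be the constraint gradient at the returned point, though this pairing is an assumption of the sketch.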
Authors:
Luner, Alan; Grimmer, Benjamin
This work considers the effect of averaging, and more generally extrapolation, of the iterates of gradient descent in smooth convex optimization. After running the method, rather than reporting the final iterate, one can report either a convex combination …
External link:
http://arxiv.org/abs/2402.12493
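A minimal sketch of the reporting strategies this abstract refers to, under assumed names: run plain gradient descent, then report a weighted combination of all iterates rather than the last one.

    def gd_with_averaging(grad, x0, L, N, weights=None):
        """Run N steps of gradient descent with stepsize 1/L, keep all
        iterates, and report the weighted combination sum_k w_k * x_k.
        Uniform weights give the classical average; weights outside [0, 1]
        that still sum to 1 give an extrapolation of the iterates."""
        xs = [x0]
        for _ in range(N):
            xs.append(xs[-1] - grad(xs[-1]) / L)
        if weights is None:
            weights = [1.0 / len(xs)] * len(xs)  # uniform averaging
        return sum(w * x for w, x in zip(weights, xs))

    # Illustrative: averaged output vs. last iterate on f(x) = x**2 (L = 2).
    avg = gd_with_averaging(lambda x: 2 * x, x0=1.0, L=2.0, N=10)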