Showing 1–10 of 5,313 for the search: '"minimax problems"'
Authors:
Wang, Jun-Lin; Xu, Zi
In this paper, we study second-order algorithms for solving nonconvex-strongly concave minimax problems, which have attracted much attention in many fields in recent years, especially in machine learning. We propose a gradient norm regularized trust…
External link:
http://arxiv.org/abs/2411.15769
Minimax problems have achieved success in machine learning applications such as adversarial training, robust optimization, and reinforcement learning. For theoretical analysis, current optimal excess risk bounds, which are composed of the generalization error and the optimiz…
External link:
http://arxiv.org/abs/2410.08497
In recent years, there has been considerable interest in designing stochastic first-order algorithms to tackle finite-sum smooth minimax problems. To obtain gradient estimates, one typically relies on the uniform sampling-with-replacement scheme…
External link:
http://arxiv.org/abs/2410.04761
Recently, there has been growing interest in minimax problems on Riemannian manifolds due to their wide applications in machine learning and signal processing. Although many algorithms have been developed for minimax problems in the Euclidean setting…
External link:
http://arxiv.org/abs/2409.19588
Due to their importance in various emerging applications, efficient algorithms for solving minimax problems have recently received increasing attention. However, many existing algorithms require prior knowledge of the problem parameters in order to a…
External link:
http://arxiv.org/abs/2407.21372
In this paper, we study second-order algorithms for the convex-concave minimax problem, which has attracted much attention in many fields, such as machine learning, in recent years. We propose a Lipschitz-free cubic regularization (LF-CR) algorithm for…
External link:
http://arxiv.org/abs/2407.03571
We consider double-regularized nonconvex-strongly concave (NCSC) minimax problems of the form $(P):\min_{x\in\mathcal{X}} \max_{y\in\mathcal{Y}}g(x)+f(x,y)-h(y)$, where $g$, $h$ are closed convex, $f$ is $L$-smooth in $(x,y)$ and strongly concave in…
External link:
http://arxiv.org/abs/2406.14371
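The composite form above, with closed convex regularizers $g$ and $h$ around a smooth coupling $f$, is typically handled by proximal updates. The following is a minimal illustrative sketch (not taken from the paper above) of proximal GDA on a toy instance: $g(x)=|x|$ (whose prox is soft-thresholding), $h$ the indicator of $[-1,1]$ (whose prox is projection), and the coupling $f(x,y)=\tfrac12 x^2-\tfrac12 y^2+xy$, whose saddle point is $(0,0)$. The step size and iteration count are illustrative choices.

```python
# Proximal GDA sketch for min_x max_y g(x) + f(x, y) - h(y) on a toy instance:
#   g(x) = |x|, h = indicator of [-1, 1], f(x, y) = 0.5*x**2 - 0.5*y**2 + x*y.
# Saddle point of the composite problem: (0, 0).

def soft_threshold(u, tau):
    """Prox of tau*|.|: shrink u toward zero by tau."""
    return max(abs(u) - tau, 0.0) * (1.0 if u > 0 else -1.0)

def prox_gda(x, y, eta=0.1, steps=2000):
    for _ in range(steps):
        # descend in x through the smooth gradient, then apply prox of eta*g
        x = soft_threshold(x - eta * (x + y), eta)
        # ascend in y, then project onto [-1, 1] (prox of the indicator h)
        y = min(max(y + eta * (-y + x), -1.0), 1.0)
    return x, y
```

With these choices the iterates contract to the saddle point; e.g. `prox_gda(2.0, -3.0)` returns a pair numerically equal to `(0, 0)`.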
Stochastic smooth nonconvex minimax problems are prevalent in machine learning, e.g., GAN training, fair classification, and distributionally robust learning. Stochastic gradient descent ascent (GDA)-type methods are popular in practice due to their…
External link:
http://arxiv.org/abs/2405.14130
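For readers unfamiliar with GDA-type methods mentioned above: the basic scheme simultaneously takes a gradient descent step in the minimization variable and an ascent step in the maximization variable. A minimal sketch (the test function, step size, and iteration count are illustrative choices, not from the paper above) on the strongly-convex-strongly-concave toy problem $\min_x \max_y \tfrac12 x^2 - \tfrac12 y^2 + xy$, whose unique saddle point is $(0,0)$:

```python
# Simultaneous gradient descent ascent on f(x, y) = 0.5*x**2 - 0.5*y**2 + x*y.
# For a small enough step size, the iterates spiral into the saddle point (0, 0).

def gda(x, y, eta=0.1, steps=1000):
    for _ in range(steps):
        gx = x + y            # df/dx
        gy = -y + x           # df/dy
        x, y = x - eta * gx, y + eta * gy  # descend in x, ascend in y
    return x, y
```

For example, `gda(2.0, -3.0)` converges to a point numerically equal to `(0, 0)`. Note that on a purely bilinear coupling $f(x,y)=xy$ this simultaneous scheme diverges, which is one reason the literature above studies more refined GDA variants.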
Author:
Scagliotti, Alessandro
In this paper, we consider ensembles of control-affine systems in $\mathbb{R}^d$, and we study simultaneous optimal control problems related to worst-case minimization. After proving that such problems admit solutions, denoting by $(\Theta^N)_N$…
External link:
http://arxiv.org/abs/2405.05782
We propose a stochastic GDA (gradient descent ascent) method with backtracking (SGDA-B) to solve nonconvex-(strongly) concave (NCC) minimax problems $\min_x \max_y \sum_{i=1}^N g_i(x_i)+f(x,y)-h(y)$, where $h$ and $g_i$ for $i = 1, \ldots, N$ are clo…
External link:
http://arxiv.org/abs/2403.07806