Showing 1 - 10 of 589 for search: '"Ding Lijun"'
Augmented Lagrangian Methods (ALMs) are widely employed for solving constrained optimization problems, and several efficient solvers have been developed based on this framework. Under the quadratic growth assumption, it is known that the dual iterates and the Karush-Kuhn-Tucker … (a sketch of the generic ALM iteration follows below)
External link:
http://arxiv.org/abs/2410.22683
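For context, the generic augmented Lagrangian iteration for an equality-constrained problem $\min_x f(x)$ s.t. $c(x) = 0$ takes the form below; this is the standard ALM template with penalty $\rho > 0$ and multiplier $y$, not necessarily the exact variant analyzed in the paper.

$$\mathcal{L}_\rho(x, y) = f(x) + y^\top c(x) + \tfrac{\rho}{2}\,\|c(x)\|^2,$$
$$x_{k+1} \in \arg\min_x \mathcal{L}_\rho(x, y_k), \qquad y_{k+1} = y_k + \rho\, c(x_{k+1}).$$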
Many practical optimization problems lack strong convexity. Fortunately, recent studies have revealed that first-order algorithms still enjoy linear convergence under various weaker regularity conditions. While the relationship among different conditions … (one representative condition is stated below)
External link:
http://arxiv.org/abs/2312.16775
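For context, the quadratic growth condition named in the previous entry is a representative example of such a weaker regularity condition: the objective is only required to grow quadratically with the distance to the solution set $\mathcal{X}^\star$, with no strong convexity needed.

$$f(x) - f^\star \;\ge\; \mu \cdot \mathrm{dist}\big(x, \mathcal{X}^\star\big)^2 \quad \text{for all } x \text{ near } \mathcal{X}^\star, \quad \mu > 0.$$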
Author:
Ding, Lijun; Wright, Stephen J.
We revisit a formulation technique for inequality-constrained optimization problems that has been known for decades: the substitution of squared variables for nonnegative variables. Using this technique, inequality constraints are converted to equality … (the substitution is spelled out below)
External link:
http://arxiv.org/abs/2310.01784
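The substitution works as follows (standard construction; symbols illustrative): a nonnegative variable is replaced by a square, and more generally an inequality constraint picks up a squared slack variable, yielding an equality-constrained problem.

$$\min_{x \ge 0} f(x) \;\longrightarrow\; \min_y f(y \circ y), \qquad g(x) \le 0 \;\longrightarrow\; g(x) + s^2 = 0,$$
where $y \circ y = (y_1^2, \dots, y_n^2)$ is the elementwise square and $s$ is a new slack variable.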
This paper rigorously shows how over-parameterization changes the convergence behavior of gradient descent (GD) for the matrix sensing problem, where the goal is to recover an unknown low-rank ground-truth matrix from near-isotropic linear measurements … (a runnable sketch of the setup follows below)
External link:
http://arxiv.org/abs/2310.01769
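A minimal runnable sketch of this setup, under assumed notation: the ground truth $M = ZZ^\top$ has rank $r$, the measurements are $b_i = \langle A_i, M \rangle$ with symmetric Gaussian $A_i$, and GD runs on a factor $X$ with $k > r$ columns. Sizes, initialization scale, and step size are illustrative, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, r, k, m = 20, 2, 5, 400   # dimension, true rank, overspecified rank, number of measurements

    # Rank-r ground truth and symmetric Gaussian sensing matrices (assumed setup)
    Z = rng.standard_normal((n, r))
    M = Z @ Z.T
    A = rng.standard_normal((m, n, n))
    A = (A + A.transpose(0, 2, 1)) / 2
    b = np.einsum('mij,ij->m', A, M)          # measurements b_i = <A_i, M>

    X = 1e-3 * rng.standard_normal((n, k))    # small random init, k > r (over-parameterization)
    eta = 1e-3 / m                            # illustrative step size
    for _ in range(5000):
        resid = np.einsum('mij,ij->m', A, X @ X.T) - b   # <A_i, X X^T> - b_i
        grad = 4 * np.einsum('m,mij->ij', resid, A) @ X  # gradient of sum_i resid_i^2 (A_i symmetric)
        X -= eta * grad

    print('relative error:', np.linalg.norm(X @ X.T - M) / np.linalg.norm(M))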
The spectral bundle method developed by Helmberg and Rendl is well established for solving large-scale semidefinite programs (SDPs) in the dual form, especially when the SDPs admit $\textit{low-rank primal solutions}$. Under mild regularity conditions … (the underlying eigenvalue formulation is recalled below)
External link:
http://arxiv.org/abs/2307.07651
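For orientation: in the Helmberg-Rendl setting the primal SDP has constant trace $a$, and the dual reduces to the nonsmooth eigenvalue problem below (one common normalization; notation assumed here), which the spectral bundle method minimizes using a small low-rank model of the $\lambda_{\max}$ term.

$$\min_{y \in \mathbb{R}^m} \; a \cdot \lambda_{\max}\big(C - \mathcal{A}^*(y)\big) + b^\top y,$$
where $\mathcal{A}^*$ is the adjoint of the linear constraint map.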
Author:
Ding, Lijun; Wang, Alex L.
We study a sample complexity vs. conditioning tradeoff in modern signal recovery problems (including sparse recovery, low-rank matrix sensing, covariance estimation, and abstract phase retrieval), where convex optimization problems are built from sampled … (a concrete instance is given below)
External link:
http://arxiv.org/abs/2307.06873
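As a concrete instance of such a sampled convex problem, sparse recovery can be posed as basis pursuit (standard formulation, for illustration); the rows of $A$ are the samples, and the tradeoff studied here concerns how the number of samples affects the conditioning of problems of this kind.

$$\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{s.t.} \quad Ax = b.$$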
Published in:
Transactions on Machine Learning Research, 2023
Trust-region methods based on Kullback-Leibler divergence are pervasively used to stabilize policy optimization in reinforcement learning. In this paper, we exploit more flexible metrics and examine two natural extensions of policy optimization with … (the standard KL trust-region template is recalled below)
External link:
http://arxiv.org/abs/2306.14133
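The KL-divergence trust-region template that such methods build on is standard (TRPO-style; symbols assumed): maximize a surrogate objective subject to a bound on the average KL divergence from the current policy.

$$\max_\theta \; \mathbb{E}\!\left[\frac{\pi_\theta(a \mid s)}{\pi_{\theta_{\mathrm{old}}}(a \mid s)}\, A^{\pi_{\theta_{\mathrm{old}}}}(s, a)\right] \quad \text{s.t.} \quad \mathbb{E}_s\!\left[\mathrm{KL}\big(\pi_{\theta_{\mathrm{old}}}(\cdot \mid s)\,\big\|\,\pi_\theta(\cdot \mid s)\big)\right] \le \delta.$$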
This paper studies the problem of recovering a low-rank matrix from several noisy random linear measurements. We consider the setting where the rank of the ground-truth matrix is unknown a priori and use an objective function built from a rank-overspecified … (the generic objective is given below)
External link:
http://arxiv.org/abs/2209.10675
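The rank-overspecified objective referred to has the generic form below (notation assumed): with noisy measurements $y = \mathcal{A}(M^\star) + \varepsilon$ and the true rank unknown, one fits a factor with $k \ge \mathrm{rank}(M^\star)$ columns.

$$\min_{X \in \mathbb{R}^{n \times k}} \big\|\mathcal{A}(XX^\top) - y\big\|_2^2, \qquad k \ge \mathrm{rank}(M^\star).$$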
Empirical evidence suggests that for a variety of overparameterized nonlinear models, most notably in neural network training, the growth of the loss around a minimizer strongly impacts its performance. Flat minima -- those around which the loss grows … (one common formalization is noted below)
External link:
http://arxiv.org/abs/2203.03756
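One common way to formalize "growth of the loss around a minimizer" is through the Hessian spectrum at the minimizer; a standard proxy (not necessarily the exact notion used in the paper) is

$$\mathrm{sharpness}(\theta^\star) \approx \lambda_{\max}\big(\nabla^2 L(\theta^\star)\big),$$
with flat minima corresponding to small $\lambda_{\max}$.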
Published in:
SIAM Journal on Mathematics of Data Science, vol. 5, no. 3, pp. 723-744, 2023
We study the asymmetric matrix factorization problem under a natural nonconvex formulation with arbitrary overparametrization. The model-free setting is considered, with minimal assumptions on the rank or singular values of the observed matrix, where … (the formulation is written out below)
External link:
http://arxiv.org/abs/2203.02839
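The natural nonconvex formulation mentioned is, in standard notation, the factored least-squares problem below, where overparametrization means the inner dimension $k$ may exceed the rank of the observed matrix $M$.

$$\min_{U \in \mathbb{R}^{m \times k},\, V \in \mathbb{R}^{n \times k}} \; \tfrac{1}{2}\,\big\|UV^\top - M\big\|_F^2.$$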