Showing 1 - 10 of 93 for query: '"90C25, 90C06"'
Author:
Kabgani, Alireza, Ahookhosh, Masoud
This paper introduces an inexact two-level smoothing optimization framework (ItsOPT) for finding first-order critical points of nonsmooth and nonconvex functions. The framework involves two levels of methodologies: at the upper level, a first- or second-order…
External link:
http://arxiv.org/abs/2410.19928
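As a rough illustration of the two-level idea, here is a minimal Python sketch assuming Moreau-envelope smoothing: the lower level computes an inexact proximal point by subgradient descent, and the upper level takes first-order steps on the resulting smooth envelope. The function names, step sizes, and choice of smoothing are illustrative assumptions, not the paper's actual ItsOPT specification.

import numpy as np

def inexact_prox(subgrad_f, x, mu, iters=50, step=0.05):
    # Lower level: approximately solve min_y f(y) + ||y - x||^2 / (2*mu)
    # by subgradient descent (an inexact proximal-point computation).
    y = x.copy()
    for _ in range(iters):
        y = y - step * (subgrad_f(y) + (y - x) / mu)
    return y

def two_level_sketch(subgrad_f, x0, mu=0.1, outer_iters=100, outer_step=0.05):
    # Upper level: first-order steps on the Moreau envelope f_mu, whose
    # gradient is (x - prox_{mu f}(x)) / mu and is (1/mu)-Lipschitz,
    # hence the conservative step size outer_step <= mu.
    x = x0.copy()
    for _ in range(outer_iters):
        y = inexact_prox(subgrad_f, x, mu)
        x = x - outer_step * (x - y) / mu
    return x

# Example: f(x) = ||x||_1, a nonsmooth test function with subgradient sign(x).
print(two_level_sketch(np.sign, np.array([2.0, -3.0])))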
Author:
Tran-Dinh, Quoc, Nguyen-Trung, Nghia
This paper presents a comprehensive analysis of the well-known extragradient (EG) method for solving both equations and inclusions. First, we unify and generalize EG for [non]linear equations to a wider class of algorithms, encompassing various existing…
External link:
http://arxiv.org/abs/2409.16859
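For reference, the basic (Korpelevich) extragradient step for an equation F(x) = 0 is easy to state in code: a prediction step followed by a correction step evaluated at the predicted point. This is a textbook sketch, not the generalized class analyzed in the paper; the operator, step size, and iteration count are illustrative.

import numpy as np

def extragradient(F, x0, eta=0.1, iters=1000):
    x = x0.astype(float)
    for _ in range(iters):
        x_half = x - eta * F(x)       # prediction (extrapolation) step
        x = x - eta * F(x_half)       # correction step at the predicted point
    return x

# Example: a monotone rotation operator, on which the plain forward
# iteration diverges while extragradient converges to the root 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(extragradient(lambda x: A @ x, np.array([1.0, 1.0])))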
Author:
Tran-Dinh, Quoc
We propose a novel class of Nesterov's stochastic accelerated forward-reflected-based methods with variance reduction to solve root-finding problems under $\frac{1}{L}$-co-coerciveness. Our algorithm is single-loop and leverages a new family of unbiased…
External link:
http://arxiv.org/abs/2406.02413
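The deterministic forward-reflected core underlying such methods replaces F(x_k) with the reflection 2F(x_k) - F(x_{k-1}), so each iteration needs only one fresh operator evaluation. A minimal sketch follows; the paper's method additionally uses Nesterov acceleration and a variance-reduced stochastic estimator, both omitted here.

import numpy as np

def forward_reflected(F, x0, eta=0.1, iters=2000):
    # x_{k+1} = x_k - eta * (2 F(x_k) - F(x_{k-1})); F(x_{k-1}) is cached,
    # so only one new evaluation of F is needed per iteration.
    x = x0.astype(float)
    F_prev = F(x)                     # initialization x_{-1} = x_0
    for _ in range(iters):
        F_cur = F(x)
        x = x - eta * (2.0 * F_cur - F_prev)
        F_prev = F_cur
    return x

# Example: a co-coercive operator, the gradient of a convex quadratic.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
print(forward_reflected(lambda x: Q @ x, np.array([1.0, -1.0])))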
Author:
Tran-Dinh, Quoc
We develop two novel stochastic variance-reduction methods to approximate a solution of root-finding problems applicable to both equations and inclusions. Our algorithms leverage a new combination of ideas from the forward-reflected-backward splitting…
External link:
http://arxiv.org/abs/2406.00937
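One standard way to build a variance-reduced estimator for a finite-sum operator F(x) = (1/n) sum_i F_i(x) is the SVRG-style correction F_i(x) - F_i(snap) + F(snap) around a snapshot point. The sketch below pairs that estimator with a plain forward step; the paper's forward-reflected-backward splitting and its specific estimator family are not reproduced.

import numpy as np

def svrg_forward(F_list, x0, eta=0.1, epochs=30, inner=50, seed=0):
    # The estimator F_i(x) - F_i(snap) + full is unbiased for F(x), and
    # its variance vanishes as x and the snapshot approach a root of F.
    rng = np.random.default_rng(seed)
    n = len(F_list)
    x = x0.astype(float)
    for _ in range(epochs):
        snap = x.copy()
        full = sum(Fi(snap) for Fi in F_list) / n   # one full pass per epoch
        for _ in range(inner):
            i = rng.integers(n)
            x = x - eta * (F_list[i](x) - F_list[i](snap) + full)
    return x

# Example: F_i(x) = Q_i x - b_i with a common root at x = [1, 1].
Q1, Q2 = np.eye(2), 2 * np.eye(2)
b1, b2 = np.ones(2), 2 * np.ones(2)
print(svrg_forward([lambda x: Q1 @ x - b1, lambda x: Q2 @ x - b2],
                   np.zeros(2)))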
Author:
Ran, Yifan
In this work, we solve a 49-year open problem: the general optimal step-size for ADMM-type algorithms. For a convex program $\min\; f(x) + g(z)$, $\text{s.t.}\; Ax - Bz = c$, given an arbitrary fixed-point initialization…
External link:
http://arxiv.org/abs/2309.10124
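For context, a standard scaled-ADMM loop for the template $\min f(x) + g(z)$ s.t. $Ax - Bz = c$ looks as follows, specialized to $A = B = I$, $c = 0$, $f = \lambda\|\cdot\|_1$, $g = \frac{1}{2}\|\cdot - b\|^2$. The fixed step size rho below is an arbitrary guess; the optimal choice of that quantity is exactly what the cited paper derives, and is not reproduced here.

import numpy as np

def soft(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(b, lam=1.0, rho=1.0, iters=200):
    # Scaled ADMM for: min_x lam*||x||_1 + (1/2)||z - b||^2  s.t.  x - z = 0.
    x = z = u = np.zeros_like(b)
    for _ in range(iters):
        x = soft(z - u, lam / rho)                 # x-update: l1 prox
        z = (b + rho * (x + u)) / (1.0 + rho)      # z-update: quadratic solve
        u = u + x - z                              # scaled dual update
    return x

# Should recover soft(b, lam) = [2, 0, -1].
print(admm_l1_denoise(np.array([3.0, 0.5, -2.0])))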
We present Scaff-PD, a fast and communication-efficient algorithm for distributionally robust federated learning. Our approach improves fairness by optimizing a family of distributionally robust objectives tailored to heterogeneous clients. We leverage…
External link:
http://arxiv.org/abs/2307.13381
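To make "distributionally robust objective" concrete: one common family maximizes a weighted sum of client losses over weights q in the simplex, with a KL penalty toward the uniform distribution, which has a closed-form softmax solution. The sketch below shows only that reweighting; Scaff-PD's accelerated primal-dual updates and control variates are not reproduced, and the temperature tau is an illustrative parameter.

import numpy as np

def dro_weights(client_losses, tau=1.0):
    # Worst-case client weights for the KL-regularized DRO objective
    #   max_{q in simplex} sum_i q_i * L_i - tau * KL(q || uniform),
    # whose solution is q_i proportional to exp(L_i / tau): high-loss
    # clients are up-weighted, so minimizing E_q[loss] improves fairness.
    l = np.asarray(client_losses, dtype=float)
    w = np.exp((l - l.max()) / tau)     # shift for numerical stability
    return w / w.sum()

print(dro_weights([0.2, 1.5, 0.9]))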
Due to the high communication overhead when training machine learning models in a distributed environment, modern algorithms invariably rely on lossy communication compression. However, when untreated, the errors caused by compression propagate, and…
External link:
http://arxiv.org/abs/2305.15155
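A classical way to keep compression errors from propagating is error feedback: store the compression residual in local memory and add it back before the next compression, instead of discarding it. The sketch below uses a top-k compressor and illustrates the generic mechanism, not necessarily the specific scheme proposed in the cited paper.

import numpy as np

def topk(v, k):
    # A standard contractive compressor: keep the k largest-magnitude entries.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_step(grad, memory, lr=0.1, k=1):
    # Error feedback: compress the error-corrected update and carry the
    # residual to the next round rather than dropping it.
    corrected = lr * grad + memory
    msg = topk(corrected, k)      # what is actually communicated
    return msg, corrected - msg   # residual kept in local memory

# Example round: the dropped coordinates survive in memory.
msg, mem = ef_step(np.array([1.0, -3.0, 0.5]), np.zeros(3))
print(msg, mem)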
Author:
Amruth, Srivathsan, Lam, Xin Yee
High dimension, low sample size (HDLSS) data are prevalent in many fields of study. There has been an increased focus recently on using machine learning and statistical methods to mine valuable information out of these data…
External link:
http://arxiv.org/abs/2305.12019
Author:
Staudigl, Mathias, Jacquot, Paulin
We develop a novel randomised block coordinate primal-dual algorithm for a class of non-smooth ill-posed convex programs. Lying midway between the celebrated Chambolle-Pock primal-dual algorithm and Tseng's accelerated proximal gradient method…
External link:
http://arxiv.org/abs/2212.12045
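For orientation, the full (non-randomized) Chambolle-Pock iteration for a saddle problem $\min_x \max_y \langle Kx, y\rangle + g(x) - f^*(y)$ alternates a dual prox, a primal prox, and an extrapolation step. The sketch below solves $\min_x \|Kx - b\|_1 + \frac{\lambda}{2}\|x\|^2$ as a concrete instance; the cited paper's algorithm instead updates randomly chosen coordinate blocks, which is not reproduced here.

import numpy as np

def chambolle_pock(K, b, lam=0.1, iters=500):
    # Vanilla Chambolle-Pock for min_x ||K x - b||_1 + (lam/2)||x||^2.
    m, n = K.shape
    L = np.linalg.norm(K, 2)
    tau = sigma = 0.9 / L                  # step sizes with tau*sigma*L^2 < 1
    x = x_bar = np.zeros(n)
    y = np.zeros(m)
    for _ in range(iters):
        # Dual prox: projection onto the l-inf ball (conjugate of ||. - b||_1).
        y = np.clip(y + sigma * (K @ x_bar - b), -1.0, 1.0)
        # Primal prox: closed form for the quadratic (lam/2)||x||^2.
        x_new = (x - tau * (K.T @ y)) / (1.0 + tau * lam)
        x_bar = 2 * x_new - x              # extrapolation (theta = 1)
        x = x_new
    return x

K = np.array([[1.0, 0.0], [1.0, 1.0]])
print(chambolle_pock(K, np.array([1.0, 2.0])))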
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that is simultaneously performant…
External link:
http://arxiv.org/abs/2207.06343