Showing 1 - 10 of 166 for the search '"Huang, Feihu"'
Author:
Huang, Feihu, Zhao, Jianyu
Decentralized learning has recently received increasing attention in machine learning due to its advantages in implementation simplicity, system robustness, and data privacy. Meanwhile, adaptive gradient methods show superior performance in many…
External link:
http://arxiv.org/abs/2408.09775
Author:
Huang, Feihu
Bilevel optimization is widely applied in many machine learning tasks such as hyper-parameter learning, meta learning, and reinforcement learning. Although many algorithms have recently been developed to solve bilevel optimization problems, they…
External link:
http://arxiv.org/abs/2407.17823
Author:
Huang, Feihu
In this paper, we propose a class of efficient adaptive bilevel methods based on mirror descent for nonconvex bilevel optimization, where the upper-level problem is nonconvex, possibly with nonsmooth regularization, and the lower-level problem is also…
External link:
http://arxiv.org/abs/2311.04520
Author:
Huang, Feihu, Chen, Songcan
Minimax optimization plays an important role in many machine learning tasks such as generative adversarial networks (GANs) and adversarial training. Although a wide variety of optimization methods have recently been proposed to solve minimax problems…
External link:
http://arxiv.org/abs/2304.10902
Author:
Huang, Feihu
In this paper, we study a class of nonconvex-nonconcave minimax optimization problems (i.e., $\min_x\max_y f(x,y)$), where $f(x,y)$ is possibly nonconvex in $x$, and is nonconcave but satisfies the Polyak-Lojasiewicz (PL) condition in $y$. Moreover…
External link:
http://arxiv.org/abs/2303.03984
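The two minimax abstracts above concern problems of the form $\min_x\max_y f(x,y)$. As a point of reference only, a minimal sketch of plain gradient descent ascent (the generic baseline such papers improve on, not the proposed method) on a toy saddle objective:

```python
def gda(grad_x, grad_y, x0, y0, lr_x=0.05, lr_y=0.05, steps=2000):
    """Plain gradient descent ascent for min_x max_y f(x, y):
    descend in x, ascend in y. A generic baseline, not the paper's method."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        x -= lr_x * grad_x(x, y)  # descent step on x
        y += lr_y * grad_y(x, y)  # ascent step on y
    return x, y

# Toy objective f(x, y) = x*y + 0.5*x**2 - 0.5*y**2, saddle point at the origin.
gx = lambda x, y: y + x   # df/dx
gy = lambda x, y: x - y   # df/dy
x_star, y_star = gda(gx, gy, x0=1.0, y0=1.0)
print(x_star, y_star)  # both approach 0
```

With these small, equal step sizes the iterates spiral into the saddle point; on harder nonconvex-nonconcave problems (the setting of the abstract) plain GDA can cycle or diverge, which is what motivates the specialized methods.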
Author:
Huang, Feihu
Bilevel optimization is a popular two-level hierarchical optimization problem that has been widely applied to many machine learning tasks such as hyperparameter learning, meta learning, and continual learning. Although many bilevel optimization methods have recently…
External link:
http://arxiv.org/abs/2303.03944
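For orientation, the two-level hierarchical structure mentioned in the bilevel abstracts above is usually written as follows (a standard textbook formulation, not taken from these papers):

```latex
\min_{x \in \mathbb{R}^d} \; F(x) := f\bigl(x, y^*(x)\bigr)
\quad \text{s.t.} \quad y^*(x) \in \operatorname*{arg\,min}_{y \in \mathbb{R}^p} g(x, y)
```

Here $f$ is the upper-level objective (e.g., validation loss in the hyperparameters $x$) and $g$ is the lower-level objective (e.g., training loss in the model weights $y$).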
Bilevel optimization has witnessed notable progress recently with new efficient algorithms emerging. However, its application in the Federated Learning setting remains relatively underexplored, and the impact of Federated Learning's inherent challenges…
External link:
http://arxiv.org/abs/2302.06701
Federated learning (FL) is an emerging learning paradigm for tackling massively distributed data. In federated learning, a set of clients jointly performs a machine learning task under the coordination of a server. The FedAvg algorithm is one of the most…
External link:
http://arxiv.org/abs/2302.06103
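The FedAvg algorithm mentioned above alternates a few local SGD steps on each client with server-side averaging of the resulting models. A minimal sketch on a toy scalar least-squares problem (the variable names, toy data, and hyperparameters are illustrative, not from the paper):

```python
import random

def local_sgd(w, data, lr=0.1, epochs=5):
    """Run a few SGD steps on one client's data for f_i(w) = mean (w*x - y)^2."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return w

def fedavg(clients, rounds=50, w0=0.0):
    """FedAvg: broadcast w, run local SGD on each client, average the results."""
    w = w0
    for _ in range(rounds):
        local_models = [local_sgd(w, data) for data in clients]
        w = sum(local_models) / len(local_models)  # simple (unweighted) average
    return w

random.seed(0)
# Each client holds noisy samples from the same ground truth y = 3*x.
clients = [[(x, 3 * x + random.gauss(0, 0.1)) for x in (0.5, 1.0, 1.5)]
           for _ in range(4)]
w_global = fedavg(clients)
print(w_global)  # close to 3
```

In the real federated setting the clients' data distributions differ (data heterogeneity), only a subset of clients participates per round, and communication is costly; those are exactly the challenges the federated-learning abstracts here address.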
Federated learning has attracted increasing attention with the emergence of distributed data. While many federated learning algorithms have been proposed for the non-convex distributed problem, federated learning in practice still faces numerous…
External link:
http://arxiv.org/abs/2212.00974
Federated learning is a popular distributed, privacy-preserving learning paradigm in machine learning. Recently, several federated learning algorithms have been proposed to solve distributed minimax problems. However, these federated minimax algorithms…
External link:
http://arxiv.org/abs/2211.07303