Showing 1 - 10 of 3,384 results for search: '"Zhang, Guojun"'
In this paper, we introduce an HPR-LP solver, an implementation of a Halpern Peaceman-Rachford (HPR) method with semi-proximal terms for solving linear programming (LP) problems. The HPR method enjoys an iteration complexity of $O(1/k)$ in terms of the Karush-Kuhn-Tucker residual …
External link:
http://arxiv.org/abs/2408.12179
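The $O(1/k)$ rate mentioned in this abstract is characteristic of the Halpern acceleration scheme. The following is a minimal sketch of a generic Halpern iteration for a nonexpansive operator $T$ (here a plain rotation, chosen only for illustration; this is not the HPR-LP solver itself, and the function names are ours):

```python
import numpy as np

def halpern(T, x0, iters):
    # Halpern iteration: x_{k+1} = (1/(k+2)) * x0 + ((k+1)/(k+2)) * T(x_k).
    # For nonexpansive T, the fixed-point residual ||x_k - T(x_k)||
    # decays at rate O(1/k).
    x = x0.copy()
    for k in range(iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Example operator: a rotation in R^2 (nonexpansive, unique fixed point 0).
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x

x0 = np.array([1.0, 0.0])
x = halpern(T, x0, 1000)
res = np.linalg.norm(x - T(x))  # small, roughly on the order of 2/k
```

The anchoring back to `x0` with vanishing weight `1/(k+2)` is what distinguishes Halpern iteration from a plain fixed-point (Picard) iteration and yields the nonasymptotic residual bound.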
This paper proposes an efficient HOT algorithm for solving optimal transport (OT) problems with finite supports. We particularly focus on an efficient implementation of the HOT algorithm for the case where the supports are in $\mathbb{R}^2$ …
External link:
http://arxiv.org/abs/2408.00598
In federated learning (FL), the common paradigm that FedAvg proposes and most algorithms follow is that clients train local models with their private data, and the model parameters are shared for central aggregation, mostly averaging. In this paradigm …
External link:
http://arxiv.org/abs/2407.08337
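The central aggregation step described in this paradigm can be sketched in a few lines. The function name and the weighting by client data size are illustrative assumptions (standard in FedAvg-style averaging), not details taken from this paper:

```python
import numpy as np

def fedavg(client_params, client_sizes):
    # FedAvg-style aggregation: weighted average of client model
    # parameters, with weights proportional to each client's data size.
    total = sum(client_sizes)
    return sum((n / total) * p for p, n in zip(client_params, client_sizes))

# Two clients holding 10 and 30 samples respectively.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [10, 30]
global_params = fedavg(params, sizes)  # -> array([2.5, 3.5])
```

In a real FL round, each client would first run local SGD on its private data before its parameters enter this average.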
Author:
Wang, Yihan, Lu, Yiwei, Zhang, Guojun, Boenisch, Franziska, Dziedzic, Adam, Yu, Yaoliang, Gao, Xiao-Shan
Machine unlearning provides viable solutions to revoke the effect of certain training data on pre-trained model parameters. Existing approaches provide unlearning recipes for classification and generative models. However, a category of important …
External link:
http://arxiv.org/abs/2406.03603
In this paper, we aim to accelerate a preconditioned alternating direction method of multipliers (pADMM), whose proximal terms are convex quadratic functions, for solving linearly constrained convex optimization problems. To achieve this, we first …
External link:
http://arxiv.org/abs/2403.18618
Parameter-efficient fine-tuning stands as the standard for efficiently fine-tuning large language and vision models on downstream tasks. Specifically, the efficiency of low-rank adaptation has facilitated the creation and sharing of hundreds of custom …
External link:
http://arxiv.org/abs/2402.15414
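The low-rank adaptation this abstract refers to replaces a full weight update with a trainable low-rank product added to a frozen weight. A minimal sketch (dimensions, scaling, and names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2          # r << min(d_out, d_in) is the adapter rank

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x, alpha=1.0):
    # y = W x + (alpha / r) * B A x; only A and B are trained,
    # so an adapter costs r*(d_in + d_out) parameters instead of d_in*d_out.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
y = lora_forward(x)
# With B initialized to zero, the adapted model initially matches
# the frozen model exactly.
```

This cheapness of storing only `A` and `B` per task is what enables the "creation and sharing of hundreds of custom" adapters the abstract alludes to.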
In self-supervised contrastive learning, a widely adopted objective function is InfoNCE, which uses the heuristic cosine similarity for representation comparison and is closely related to maximizing the Kullback-Leibler (KL)-based mutual information …
External link:
http://arxiv.org/abs/2402.10150
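The InfoNCE objective with cosine similarity, as described in this abstract, can be sketched for a single anchor with one positive and a set of negatives (the temperature value and function names are illustrative assumptions):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    # InfoNCE loss: -log( exp(s(a,p)/tau) / sum_j exp(s(a,x_j)/tau) ),
    # where s is cosine similarity and the sum runs over the positive
    # plus all negatives.
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    logits = np.array([cos(anchor, positive)]
                      + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()            # stabilize the softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
easy = info_nce(a, a, [b])   # aligned positive, orthogonal negative: near 0
hard = info_nce(a, b, [a])   # orthogonal positive, aligned negative: large
```

Minimizing this loss pulls the anchor toward its positive and pushes it away from negatives on the unit sphere, which is the "representation comparison" the abstract mentions.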
Author:
He, Yifei, Zhou, Shiji, Zhang, Guojun, Yun, Hyokun, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul, Zhao, Han
Multi-task learning (MTL) considers learning a joint model for multiple tasks by optimizing a convex combination of all task losses. To solve the optimization problem, existing methods use an adaptive weight updating scheme, where task weights are dynamically …
External link:
http://arxiv.org/abs/2402.02009
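The convex combination of task losses that this abstract describes can be sketched as follows. The softmax reweighting shown is one common illustrative heuristic for "adaptive" weights, not the paper's actual scheme:

```python
import numpy as np

def scalarized_loss(task_losses, weights):
    # MTL objective: sum_i w_i * L_i, with weights on the probability simplex.
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    return float(w @ np.asarray(task_losses, dtype=float))

def softmax_weights(task_losses, temp=1.0):
    # Illustrative adaptive heuristic: upweight tasks whose current loss
    # is larger, via a temperature-controlled softmax.
    z = np.asarray(task_losses, dtype=float) / temp
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

losses = [0.5, 2.0]
w = softmax_weights(losses)          # second task gets the larger weight
total = scalarized_loss(losses, w)   # lies between min and max task loss
```

Because the weights sum to one, the scalarized objective is always a convex combination, so it stays within the range of the individual task losses.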
Author:
Khalil, Yasser H., Estiri, Amir H., Beitollahi, Mahdi, Asadi, Nader, Hemati, Sobhan, Li, Xu, Zhang, Guojun, Chen, Xi
In the realm of real-world devices, centralized servers in Federated Learning (FL) present challenges including communication bottlenecks and susceptibility to a single point of failure. Additionally, contemporary devices inherently exhibit model and …
External link:
http://arxiv.org/abs/2402.01863
Author:
Beitollahi, Mahdi, Bie, Alex, Hemati, Sobhan, Brunswic, Leo Maxime, Li, Xu, Chen, Xi, Zhang, Guojun
In one-shot federated learning (FL), clients collaboratively train a global model in a single round of communication. Existing approaches for one-shot FL enhance communication efficiency at the expense of diminished accuracy. This paper introduces Fe…
External link:
http://arxiv.org/abs/2402.01862