Showing 1 - 10 of 3,032 for search: '"Zhang, Guojun"'
Author:
Wang, Yihan, Lu, Yiwei, Zhang, Guojun, Boenisch, Franziska, Dziedzic, Adam, Yu, Yaoliang, Gao, Xiao-Shan
Machine unlearning provides viable solutions to revoke the effect of certain training data on pre-trained model parameters. Existing approaches provide unlearning recipes for classification and generative models. However, a category of important mach…
External link:
http://arxiv.org/abs/2406.03603
In this paper, we aim to accelerate a preconditioned alternating direction method of multipliers (pADMM), whose proximal terms are convex quadratic functions, for solving linearly constrained convex optimization problems. To achieve this, we first re…
External link:
http://arxiv.org/abs/2403.18618
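The entry above concerns ADMM-type methods for linearly constrained convex problems. As background, a minimal sketch of plain (unpreconditioned, unaccelerated) ADMM on a lasso problem, min 0.5||Ax-b||^2 + lam*||z||_1 s.t. x - z = 0; all parameter values here are chosen for illustration and are not taken from the paper:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    # Plain scaled-dual ADMM for lasso (illustrative baseline, not pADMM).
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))   # cached x-update system
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))          # x-update: ridge-like solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                          # scaled dual ascent on x - z = 0
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
w = np.zeros(10); w[:3] = [2.0, -1.5, 1.0]     # sparse ground truth
b = A @ w + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b), 2))           # roughly sparse, close to w
```

The x-update is a linear solve because the smooth term is quadratic; the paper's preconditioned variant replaces this step's metric, which the sketch above does not model.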
Parameter-efficient fine-tuning stands as the standard for efficiently fine-tuning large language and vision models on downstream tasks. Specifically, the efficiency of low-rank adaptation has facilitated the creation and sharing of hundreds of custo…
External link:
http://arxiv.org/abs/2402.15414
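Low-rank adaptation, mentioned in the entry above, freezes the pre-trained weight W and trains only a low-rank update BA with rank r much smaller than the layer dimensions. A minimal NumPy sketch (dimensions and initialization chosen for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 32, 4

W = rng.standard_normal((d, k))          # frozen pre-trained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection (zero init,
                                         # so the adapter starts as a no-op)

def adapted_forward(x):
    # Effective weight is W + B @ A; only A and B receive gradient updates.
    return x @ (W + B @ A).T

x = rng.standard_normal((8, k))
print(adapted_forward(x).shape)          # (8, 64)
print(A.size + B.size, "adapter params vs.", W.size, "full params")
```

Because only A and B are stored per task, hundreds of such adapters can be shared and swapped against one frozen base model, which is the sharing pattern the abstract alludes to.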
In self-supervised contrastive learning, a widely adopted objective function is InfoNCE, which uses the heuristic cosine similarity for the representation comparison, and is closely related to maximizing the Kullback-Leibler (KL)-based mutual informa…
External link:
http://arxiv.org/abs/2402.10150
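The InfoNCE objective referenced above scores each anchor against one positive and the other in-batch samples as negatives, using temperature-scaled cosine similarity. A minimal sketch of this standard baseline loss (the paper studies alternatives to it; temperature and shapes here are illustrative):

```python
import numpy as np

def infonce_loss(z1, z2, tau=0.1):
    # Row i of z1 and row i of z2 are two views of the same sample.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # unit-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                 # cosine / temperature
    logits = sim - sim.max(axis=1, keepdims=True)         # stabilize softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))   # -log p(positive | anchor), averaged

rng = np.random.default_rng(0)
z = rng.standard_normal((16, 8))
print(infonce_loss(z, z))                              # identical views: low loss
print(infonce_loss(z, rng.standard_normal((16, 8))))   # mismatched views: higher loss
```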
Author:
He, Yifei, Zhou, Shiji, Zhang, Guojun, Yun, Hyokun, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul, Zhao, Han
Multi-task learning (MTL) considers learning a joint model for multiple tasks by optimizing a convex combination of all task losses. To solve the optimization problem, existing methods use an adaptive weight updating scheme, where task weights are dy…
External link:
http://arxiv.org/abs/2402.02009
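The MTL setup above optimizes a convex combination of task losses, with weights updated adaptively during training. A toy illustration of one such rule (a softmax over the current loss values that upweights harder tasks; this particular rule is an assumption for illustration, not the paper's scheme):

```python
import numpy as np

def combine_losses(losses, temperature=1.0):
    # Softmax over current losses (illustrative adaptive rule): larger losses
    # get larger weights; the weights stay on the probability simplex.
    losses = np.asarray(losses, dtype=float)
    w = np.exp(losses / temperature)
    w = w / w.sum()                 # convex weights: w_i >= 0, sum_i w_i = 1
    return float(w @ losses), w     # joint objective and the weights used

total, w = combine_losses([0.5, 2.0, 1.0])
print(w)      # weights favor the task with loss 2.0
print(total)  # convex combination, so between 0.5 and 2.0
```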
Author:
Khalil, Yasser H., Estiri, Amir H., Beitollahi, Mahdi, Asadi, Nader, Hemati, Sobhan, Li, Xu, Zhang, Guojun, Chen, Xi
In the realm of real-world devices, centralized servers in Federated Learning (FL) present challenges including communication bottlenecks and susceptibility to a single point of failure. Additionally, contemporary devices inherently exhibit model and…
External link:
http://arxiv.org/abs/2402.01863
Author:
Beitollahi, Mahdi, Bie, Alex, Hemati, Sobhan, Brunswic, Leo Maxime, Li, Xu, Chen, Xi, Zhang, Guojun
In one-shot federated learning (FL), clients collaboratively train a global model in a single round of communication. Existing approaches for one-shot FL enhance communication efficiency at the expense of diminished accuracy. This paper introduces Fe…
External link:
http://arxiv.org/abs/2402.01862
Federated Learning (FL) involves training a model over a dataset distributed among clients, with the constraint that each client's dataset is localized and possibly heterogeneous. In FL, small and noisy datasets are common, highlighting the need for…
External link:
http://arxiv.org/abs/2312.09817
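Federated learning as described above can be sketched with the classic FedAvg recipe: each client runs local gradient steps on its private data, then the server averages the client models weighted by local dataset size. A toy NumPy version (linear regression stands in for the model; this is generic FedAvg, not the paper's method):

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=5):
    # A few local gradient-descent steps on one client's private data.
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (20, 50, 30):                       # unequal local dataset sizes
    X = rng.standard_normal((n, 2))
    clients.append((X, X @ w_true + 0.01 * rng.standard_normal(n)))

w_global = np.zeros(2)
for _ in range(20):                          # communication rounds
    local_models = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(local_models, axis=0, weights=sizes)  # FedAvg step

print(np.round(w_global, 2))                 # close to w_true
```

The size-weighted average keeps the update unbiased toward larger clients; heterogeneity and noise, which the abstract highlights, show up here as each client's local optimum drifting from the others.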
Author:
Hemati, Sobhan, Beitollahi, Mahdi, Estiri, Amir Hossein, Omari, Bassel Al, Chen, Xi, Zhang, Guojun
Despite the huge effort in developing novel regularizers for Domain Generalization (DG), adding simple data augmentation to vanilla ERM, which is a practical implementation of the Vicinal Risk Minimization (VRM) principle \citep{chapelle2000vicina…
External link:
http://arxiv.org/abs/2312.05387
Discriminatively trained, deterministic neural networks are the de facto choice for classification problems. However, even though they achieve state-of-the-art results on in-domain test sets, they tend to be overconfident on out-of-distribution (OOD)…
External link:
http://arxiv.org/abs/2311.03683