Showing 1 - 6 of 6
for search: '"Beitollahi, Mahdi"'
Parameter-efficient fine-tuning stands as the standard for efficiently fine-tuning large language and vision models on downstream tasks. Specifically, the efficiency of low-rank adaptation has facilitated the creation and sharing of hundreds of custo…
External link:
http://arxiv.org/abs/2402.15414
Author:
Khalil, Yasser H., Estiri, Amir H., Beitollahi, Mahdi, Asadi, Nader, Hemati, Sobhan, Li, Xu, Zhang, Guojun, Chen, Xi
In the realm of real-world devices, centralized servers in Federated Learning (FL) present challenges including communication bottlenecks and susceptibility to a single point of failure. Additionally, contemporary devices inherently exhibit model and…
External link:
http://arxiv.org/abs/2402.01863
Author:
Beitollahi, Mahdi, Bie, Alex, Hemati, Sobhan, Brunswic, Leo Maxime, Li, Xu, Chen, Xi, Zhang, Guojun
In one-shot federated learning (FL), clients collaboratively train a global model in a single round of communication. Existing approaches for one-shot FL enhance communication efficiency at the expense of diminished accuracy. This paper introduces Fe…
External link:
http://arxiv.org/abs/2402.01862
Author:
Hemati, Sobhan, Beitollahi, Mahdi, Estiri, Amir Hossein, Omari, Bassel Al, Chen, Xi, Zhang, Guojun
Despite the huge effort in developing novel regularizers for Domain Generalization (DG), adding simple data augmentation to the vanilla ERM, which is a practical implementation of the Vicinal Risk Minimization (VRM) principle \citep{chapelle2000vicina…
External link:
http://arxiv.org/abs/2312.05387
Layer normalization (LN) is a widely adopted deep learning technique, especially in the era of foundation models. Recently, LN has been shown to be surprisingly effective in federated learning (FL) with non-i.i.d. data. However, exactly why and how it…
External link:
http://arxiv.org/abs/2308.09565
Academic article
This result requires login to view.