Showing 1 - 1 of 1 for search: '"Koo, Jabin"'
Federated fine-tuning of Large Language Models (LLMs) has recently gained attention, but it suffers from the heavy communication overhead of transmitting large model updates. Low-Rank Adaptation (LoRA) has been proposed as a solution, yet its application in federated settings …
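The communication saving the abstract alludes to comes from LoRA's parameterization: the pretrained weight stays frozen, and only two small low-rank factors are trained, so a federated client needs to transmit only those factors rather than the full weight matrix. Below is a minimal sketch of a standard LoRA linear layer in PyTorch; it is an illustration of the general technique, not the method of this paper, and the class name and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Standard LoRA wrapper: y = W x + (alpha/r) * B A x, with W frozen."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weight is never updated
        # Trainable low-rank factors: A is (r, in), B is (out, r).
        # B starts at zero so the adapter initially leaves the model unchanged.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), r=8)
full = 4096 * 4096                 # parameters in the frozen weight: ~16.8M
lora = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(lora, "trainable params vs", full)  # 65,536 vs 16,777,216
```

In a federated round, each client would send only `A` and `B` (r * (in + out) values) instead of the full in * out update, which is the communication reduction that motivates using LoRA in this setting.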
External link:
http://arxiv.org/abs/2410.22815