Showing 1 - 10 of 2,994 for the search: '"Canini A"'
The effectiveness of Large Language Models (LLMs) in solving tasks vastly depends on the quality of the instructions, which often require fine-tuning through extensive human effort. This highlights the need for automated instruction optimization; how…
External link:
http://arxiv.org/abs/2411.12736
Progressing beyond centralized AI is of paramount importance; yet distributed AI solutions, in particular various federated learning (FL) algorithms, are often not comprehensively assessed, which prevents the research community from identifying the…
External link:
http://arxiv.org/abs/2407.14154
This work tackles the challenges of data heterogeneity and communication limitations in decentralized federated learning. We focus on creating a collaboration graph that guides each client in selecting suitable collaborators for training personalized…
External link:
http://arxiv.org/abs/2406.06520
We propose NeuronaBox, a flexible, user-friendly, and high-fidelity approach to emulate DNN training workloads. We argue that to accurately observe performance, it is possible to execute the training workload on a subset of real nodes and emulate the…
External link:
http://arxiv.org/abs/2405.02969
Authors:
Alballa, Norah; Canini, Marco
This research investigates the enhancement of knowledge distillation (KD) processes in pre-trained models, an emerging field in knowledge transfer with significant implications for distributed training and federated learning environments. These envir…
External link:
http://arxiv.org/abs/2402.14922
In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients. This study explores the nuances of this issue, emphasizing the cri…
External link:
http://arxiv.org/abs/2402.05558
In distributed training, communication often emerges as a bottleneck. In response, we introduce Kimad, a solution that offers adaptive gradient compression. By continuously monitoring bandwidth, Kimad refines compression ratios to match specific neur…
External link:
http://arxiv.org/abs/2312.08053
Published in:
International Journal of Nanomedicine, Vol. 2016, Issue 1, pp. 557-574 (2016)
Angelo Gismondi,1 Valentina Nanni,1 Giacomo Reina,2 Silvia Orlanducci,2 Maria Letizia Terranova,2 Antonella Canini1 — 1Department of Biology, 2Department of Chemical Science and Technology, University of Rome “Tor Vergata”, Rome, Italy. Abstract: Fo…
External link:
https://doaj.org/article/9403db7aea0c4d94b450abfff03a6174
Efficient distributed training is a principal driver of recent advances in deep learning. However, communication often proves costly and becomes the primary bottleneck in these systems. As a result, there is a demand for the design of efficient commu…
External link:
http://arxiv.org/abs/2305.18627
Federated learning, an emerging machine learning paradigm, enables clients to collaboratively train a model without exchanging local data. Clients participating in the training process significantly impact the convergence rate, learning efficiency, a…
External link:
http://arxiv.org/abs/2302.06599