Description: |
Federated Learning (FL) is a decentralized machine learning (ML) technique that allows a number of participants to train an ML model collaboratively without sharing their private local datasets with others. When the participants are unmanned aerial vehicles (UAVs), UAV-enabled FL experiences heterogeneity because the collected data are heavily skewed (non-independent and identically distributed, non-IID). In addition, UAVs may exhibit unintentional misbehavior, failing to send updates to the FL server due, for instance, to disconnection from the FL system caused by high mobility, unavailability, or battery depletion. Such challenges can significantly hinder the convergence of the FL model. A recent way to tackle these challenges is client selection based on customized criteria that consider UAV computing power and energy consumption. However, most existing client selection schemes neglect the participants' reliability. Indeed, FL can be targeted by poisoning attacks, in which malicious UAVs upload poisoned local models to the FL server, either providing targeted false predictions for specifically chosen inputs or degrading the global model's accuracy by tampering with the local model. Hence, in this article we propose a novel client selection scheme that enhances convergence by prioritizing fast UAVs with high reliability scores while excluding malicious UAVs from training. Through experiments, we assess the effectiveness of our scheme against different attack scenarios in terms of convergence and achieved model accuracy. Finally, we demonstrate the performance superiority of the proposed approach compared to baseline methods.
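
  For illustration only, the following is a minimal Python sketch of reliability-aware client selection of the kind summarized above: filter out UAVs whose reliability score falls below a threshold, then pick the fastest of the remaining clients for the next FL round. The client fields, the threshold value, and the ranking rule are assumptions made for this sketch, not the actual scheme proposed in the article.

  import random

  # Hypothetical UAV client record; the fields and their ranges are
  # illustrative assumptions, not values defined in the article.
  class UAVClient:
      def __init__(self, uav_id, compute_speed, reliability_score):
          self.uav_id = uav_id
          self.compute_speed = compute_speed          # e.g., local updates per second
          self.reliability_score = reliability_score  # in [0, 1]

  def select_clients(clients, num_selected, reliability_threshold=0.5):
      # Exclude clients deemed unreliable (assumed proxy for malicious or
      # frequently unavailable UAVs), then prioritize the fastest ones.
      trusted = [c for c in clients if c.reliability_score >= reliability_threshold]
      ranked = sorted(trusted, key=lambda c: c.compute_speed, reverse=True)
      return ranked[:num_selected]

  # Usage: simulate a pool of UAVs and select participants for one FL round.
  pool = [UAVClient(i, random.uniform(0.1, 1.0), random.uniform(0.0, 1.0))
          for i in range(20)]
  round_participants = select_clients(pool, num_selected=5)
  print([c.uav_id for c in round_participants])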