Author: |
Zhang, Han, Elsayed, Medhat, Bavand, Majid, Gaigalas, Raimundas, Ozcan, Yigit, Erol-Kantarci, Melike |
Year of publication: |
2024 |
Subject: |
|
Document type: |
Working Paper |
Description: |
Federated learning (FL) allows distributed participants to train machine learning models in a decentralized manner. It can be used for radio signal classification with multiple receivers due to its benefits in terms of privacy and scalability. However, existing FL algorithms often suffer from slow and unstable convergence and are vulnerable to poisoning attacks from malicious participants. In this work, we design a versatile FL framework that improves model performance both in a secure system and under attack. To this end, we leverage attention mechanisms as a defense against attacks in FL and propose a robust FL algorithm by integrating attention mechanisms into the global model aggregation step. Specifically, two attention models are combined to compute the amount of attention assigned to each participant, which is then used to determine the weights of the local models during global aggregation. The proposed algorithm is evaluated on a real-world dataset and outperforms existing algorithms, both in secure systems and in systems under data poisoning attacks. |
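The aggregation step described in the abstract can be illustrated with a minimal sketch. This is not the paper's method: it assumes NumPy, flattened model parameters, and a single similarity-based attention score per participant (a softmax over cosine similarities to the mean update), whereas the paper combines two attention models. The function attention_aggregate and its parameters are hypothetical.

    # Minimal sketch of attention-weighted global aggregation in federated learning.
    # Assumption (not from the paper): attention is a softmax over the cosine
    # similarity between each local update and the mean update; the paper's
    # actual dual-attention design is not reproduced here.
    import numpy as np

    def attention_aggregate(global_weights, local_weights_list, temperature=1.0):
        """Aggregate local models into a new global model using attention weights."""
        updates = [lw - global_weights for lw in local_weights_list]
        mean_update = np.mean(updates, axis=0)

        def cosine(a, b):
            denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
            return float(np.dot(a, b) / denom)

        # Participants whose updates deviate from the consensus direction
        # (e.g. poisoned updates) receive lower attention scores.
        scores = np.array([cosine(u, mean_update) for u in updates]) / temperature
        attention = np.exp(scores - scores.max())
        attention /= attention.sum()

        # Attention-weighted average replaces the uniform FedAvg weighting.
        new_global = sum(a * lw for a, lw in zip(attention, local_weights_list))
        return new_global, attention

    # Toy usage: five benign clients and one client sending a poisoned update.
    rng = np.random.default_rng(0)
    g = np.zeros(10)
    locals_ = [g + rng.normal(0.1, 0.01, 10) for _ in range(5)]
    locals_.append(g - rng.normal(5.0, 0.1, 10))  # malicious, opposite direction
    new_g, att = attention_aggregate(g, locals_)
    print("attention per client:", np.round(att, 3))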
Database: |
arXiv |
External link: |
|