Dynamic defense against byzantine poisoning attacks in federated learning
Author: | Nuria Rodríguez-Barroso, Eugenio Martínez-Cámara, M. Victoria Luzón, Francisco Herrera |
---|---|
Year of publication: | 2022 |
Subject: |
FOS: Computer and information sciences
Computer Science - Machine Learning; Computer Science - Cryptography and Security; Computer Science - Artificial Intelligence; Computer Networks and Communications; Adversarial attacks; Federated learning; Dynamic aggregation operator; Deep learning; Machine Learning (stat.ML); Byzantine attacks; Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Statistics - Machine Learning; Hardware and Architecture; Cryptography and Security (cs.CR); Software |
Source: | Digibug: Repositorio Institucional de la Universidad de Granada |
ISSN: | 0167-739X |
Description: | Federated learning, as a distributed learning paradigm that conducts training on local devices without accessing the training data, is vulnerable to Byzantine poisoning adversarial attacks. We argue that the federated learning model has to counter such adversarial attacks by filtering out the adversarial clients through the federated aggregation operator. We propose a dynamic federated aggregation operator that dynamically discards adversarial clients and thereby prevents the corruption of the global learning model. We assess it as a defense against adversarial attacks by deploying a deep learning classification model in a federated learning setting on the Fed-EMNIST Digits, Fashion MNIST and CIFAR-10 image datasets. The results show that the dynamic selection of the clients to aggregate enhances the performance of the global learning model and discards both the adversarial clients and the poor clients (those with low-quality models). R&D&I grants - MCIN/AEI, Spain PID-2020-119478GB-I00 PID2020-116118GA-I00 EQC2018-005-084-P ERDF A way of making Europe MCIN/AEI FPU18/04475 IJC2018-036092-I |
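The dynamic aggregation idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' exact operator: it assumes each client update receives a quality score (e.g. accuracy on a held-out validation set), discards clients whose score falls below a fraction of the best score observed in the round, and averages the surviving updates into the global model. The function name and the cutoff rule are illustrative assumptions.

```python
# Hedged sketch of dynamic client filtering in federated aggregation
# (illustrative; not the paper's exact operator).

def dynamic_aggregate(updates, scores, cutoff_fraction=0.5):
    """Average only the client updates whose quality score is at least
    `cutoff_fraction` of the best score seen this round.

    updates : list of flattened model updates (list of floats per client)
    scores  : one quality score per client update, e.g. validation accuracy
    """
    best = max(scores)
    # Dynamically discard clients scoring well below the round's best,
    # treating them as adversarial or low-quality.
    kept = [u for u, s in zip(updates, scores) if s >= cutoff_fraction * best]
    if not kept:  # degenerate round: fall back to aggregating everyone
        kept = updates
    n = len(kept)
    dim = len(kept[0])
    # Coordinate-wise mean of the surviving updates.
    return [sum(u[i] for u in kept) / n for i in range(dim)]
```

In this sketch a poisoned update with a very low score (the third client below) is excluded from the round's average, so it never reaches the global model.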
Database: | OpenAIRE |
External link: |