Secure Aggregation Against Malicious Users
Author: | Ferhat Karakoç, Melek Önen, Zeki Bilgin |
---|---|
Contributors: | Ericsson Research; Eurecom (Sophia Antipolis); Arçelik Research; ANR-19-P3IA-0002; 3IA Côte d'Azur (2019) |
Language: | English |
Year of publication: | 2021 |
Subject: | Correctness; Aggregate (data warehouse); Access control; News aggregator; Proof of concept; Backdoor; Computer network; Protocol (object-oriented programming); [INFO.INFO-IA] Computer Science [cs]/Computer Aided Engineering |
Source: | SACMAT '21: The 26th ACM Symposium on Access Control Models and Technologies, Jun 2021, Barcelona, Spain, pp. 115-124, ⟨10.1145/3450569.3463572⟩ |
DOI: | 10.1145/3450569.3463572 |
Description: | International audience; Secure aggregation protocols allow an aggregator to compute the sum of multiple users' data in a privacy-preserving manner. Existing protocols assume that the users from whom the data is collected are fully trusted regarding the correctness of their individual inputs. We believe this assumption is too strong, for example when such protocols are used for federated learning, whereby the aggregator receives all users' contributions and aggregates them to train and obtain the joint model. A malicious user contributing incorrect inputs can mount model poisoning or backdoor injection attacks without being detected. In this paper, we propose the first secure aggregation protocol that considers users as potentially malicious. This new protocol enables the correct computation of the aggregate result, in a privacy-preserving manner, only if individual inputs belong to a legitimate interval. To this aim, the solution uses a newly designed oblivious programmable pseudo-random function. We validate our solution as a proof of concept under a federated learning scenario in which potential backdoor injection attacks exist. |
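The abstract above builds on the standard secure-aggregation setting, where each user masks its input so the aggregator learns only the sum. The following is a minimal illustrative sketch of that baseline idea with pairwise cancelling masks; it is not the paper's OPPRF-based protocol (which additionally enforces that inputs lie in a legitimate interval), and the field modulus, seeded mask derivation, and function names are assumptions for illustration.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus; a real protocol fixes this publicly


def pairwise_masks(user_ids, seed=0):
    """Derive cancelling masks: for each pair (i, j) with i < j, user i adds
    the shared value and user j subtracts it, so all masks sum to zero.
    In a real protocol each pair derives this value via key agreement;
    here a seeded RNG stands in for that shared randomness."""
    rng = random.Random(seed)
    shared = {(i, j): rng.randrange(PRIME)
              for i in user_ids for j in user_ids if i < j}
    masks = {}
    for i in user_ids:
        m = 0
        for j in user_ids:
            if i < j:
                m = (m + shared[(i, j)]) % PRIME
            elif j < i:
                m = (m - shared[(j, i)]) % PRIME
        masks[i] = m
    return masks


def aggregate(inputs, masks):
    """Aggregator sums the masked contributions; the masks cancel,
    revealing only the sum of the users' inputs."""
    total = sum((x + masks[i]) % PRIME for i, x in inputs.items())
    return total % PRIME


users = [1, 2, 3]
inputs = {1: 10, 2: 20, 3: 12}
masks = pairwise_masks(users, seed=42)
print(aggregate(inputs, masks))  # 42, i.e. 10 + 20 + 12
```

The paper's contribution is orthogonal to this masking step: a malicious user could still submit an out-of-range value here, which is exactly the gap the proposed range-restricted protocol closes.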
Database: | OpenAIRE |
External link: |