Model Segmentation for Storage Efficient Private Federated Learning with Top $r$ Sparsification
Author: Vithana, Sajani; Ulukus, Sennur
Year of publication: 2022
Subject:
Document type: Working Paper
Description: In federated learning (FL) with top $r$ sparsification, millions of users collectively train a machine learning (ML) model on their personal data, communicating only the most significant $r$ fraction of their local updates to reduce the communication cost. It has been shown that both the values and the indices of these selected (sparse) updates leak information about the users' personal data. In this work, we investigate different methods to carry out user-database communications in FL with top $r$ sparsification efficiently, while guaranteeing information-theoretic privacy of the users' personal data. These methods, however, incur a considerable storage cost. As a solution, we present two schemes with different properties that use MDS-coded storage along with a model segmentation mechanism to perform private FL with top $r$ sparsification, reducing the storage cost at the expense of a controllable amount of information leakage. Comment: arXiv admin note: text overlap with arXiv:2212.09704. (An illustrative sketch of the top $r$ selection step follows this record.)
Database: arXiv
External link:
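
The central primitive in the abstract is top $r$ sparsification: each user uploads only the largest-magnitude $r$ fraction of its local update, which is why both the selected indices and their values can leak information. Below is a minimal sketch of that selection step, assuming a flattened NumPy update vector and a hypothetical helper name `top_r_sparsify`; it illustrates the sparsification only and is not the paper's private communication or MDS-coded storage scheme.

```python
import numpy as np

def top_r_sparsify(update: np.ndarray, r: float):
    """Keep only the largest-magnitude r fraction of a flattened local update.

    Returns the indices and values a user would communicate; all other
    coordinates are implicitly zero. (Illustrative sketch, not the paper's scheme.)
    """
    k = max(1, int(r * update.size))                # number of coordinates kept
    idx = np.argpartition(np.abs(update), -k)[-k:]  # indices of the k largest magnitudes
    return idx, update[idx]

# Example: sparsify a local update before uploading, keeping 1% of its entries.
rng = np.random.default_rng(0)
local_update = rng.standard_normal(1000)
indices, values = top_r_sparsify(local_update, r=0.01)
print(indices.shape, values.shape)  # (10,) (10,)
```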