MOFHEI: Model Optimizing Framework for Fast and Efficient Homomorphically Encrypted Neural Network Inference
Author: Ghazvinian, Parsa; Podschwadt, Robert; Panzade, Prajwal; Rafiei, Mohammad H.; Takabi, Daniel
Year of publication: 2024
Document type: Working Paper
Description: Due to the widespread use of machine learning (ML) across many fields and the need for data privacy, privacy-preserving machine learning (PPML) solutions have recently gained significant traction. One group of approaches relies on Homomorphic Encryption (HE), which allows ML tasks to be performed directly on encrypted data. However, even with state-of-the-art HE schemes, HE operations remain significantly slower than their plaintext counterparts and require considerable memory. We therefore propose MOFHEI, a framework that optimizes the model to make HE-based neural network inference, referred to as private inference (PI), fast and efficient. First, our learning-based method automatically transforms a pre-trained ML model into a version compatible with HE operations, called the HE-friendly version. Then, our iterative block pruning method prunes the model's parameters in configurable block shapes aligned with the data packing method. This lets us drop a large number of costly HE operations, reducing latency and memory consumption while maintaining the model's performance. We evaluate our framework through extensive experiments on different models and datasets. Our method achieves a pruning ratio of up to 98% on LeNet, eliminating up to 93% of the HE operations required for PI and reducing latency and memory by factors of 9.63 and 4.04, respectively, with negligible accuracy loss.
Comment: 10 pages, 5 figures; IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications 2024
Database: arXiv
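
The block pruning described in the abstract zeroes parameters in configurable block shapes so that whole groups of costly HE operations can be skipped. As a rough illustration only (this is not the MOFHEI implementation; the `block_prune` helper, the block shape, and the L2-norm scoring rule below are assumptions), a block-wise magnitude pruning pass over a 2-D weight matrix might look like this:

```python
# Illustrative sketch of block-wise magnitude pruning (hypothetical, not the
# MOFHEI implementation): the weight matrix is split into fixed-shape blocks
# and the lowest-norm blocks are zeroed until a target pruning ratio is met.
import numpy as np

def block_prune(weights: np.ndarray, block_shape=(4, 4), prune_ratio=0.9) -> np.ndarray:
    """Zero out the lowest-L2-norm blocks of a 2-D weight matrix."""
    bh, bw = block_shape
    rows, cols = weights.shape
    # Pad so the matrix divides evenly into blocks.
    pad_r, pad_c = (-rows) % bh, (-cols) % bw
    padded = np.pad(weights, ((0, pad_r), (0, pad_c)))
    nr, nc = padded.shape[0] // bh, padded.shape[1] // bw
    # View the matrix as an (nr, nc) grid of (bh, bw) blocks and score each block.
    blocks = padded.reshape(nr, bh, nc, bw).transpose(0, 2, 1, 3)
    scores = np.linalg.norm(blocks, axis=(2, 3))
    # Zero the k lowest-scoring blocks, where k is set by the pruning ratio.
    k = int(round(prune_ratio * scores.size))
    if k > 0:
        threshold = np.sort(scores, axis=None)[k - 1]
        mask = scores > threshold
    else:
        mask = np.ones_like(scores, dtype=bool)
    pruned_blocks = blocks * mask[:, :, None, None]
    # Undo the block view and the padding.
    pruned = pruned_blocks.transpose(0, 2, 1, 3).reshape(padded.shape)
    return pruned[:rows, :cols]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(16, 16))
    w_pruned = block_prune(w, block_shape=(4, 4), prune_ratio=0.75)
    print("fraction of zero weights:", np.mean(w_pruned == 0))
```

In an HE pipeline such as the one the abstract describes, the block shape would be chosen to match the ciphertext packing layout, so that a zeroed block corresponds to an entire packed HE operation that can be dropped.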