Author:
Li, Xiaohai, Yang, Xiaodong, Zhang, Yingwei, Yang, Jianrong, Chen, Yiqiang |
Source:
International Journal of Machine Learning & Cybernetics; Nov 2024, Vol. 15 Issue 11, p5199-5215, 17p
Abstract:
Pruning and quantization are among the most widely used techniques for deep learning model compression, and applying them together holds the potential for even greater performance gains. Most existing works combine pruning and quantization sequentially. However, this separation makes it difficult to fully leverage their complementarity and exploit the potential benefits of joint optimization. To address the limitations of existing methods, we propose A-JOPQ (adaptive joint optimization of pruning and quantization), an adaptive joint optimization framework for pruning and quantization. Starting from a deep neural network, A-JOPQ first constructs a pruning network through adaptive mutual learning with a quantization network. This process compensates for the loss of structural information during pruning. Subsequently, the pruning network is incrementally quantized using adaptive multi-teacher knowledge distillation from itself and the original uncompressed model. This approach effectively mitigates the adverse effects of quantization. Finally, A-JOPQ generates a pruning-quantization network that achieves significant model compression while maintaining high accuracy. Extensive experiments conducted on several public datasets demonstrate the superiority of our proposed method. Compared to existing methods, A-JOPQ achieves higher accuracy with a smaller model size. Additionally, we extend A-JOPQ to federated learning (FL) settings. Simulation experiments show that A-JOPQ can enhance FL by enabling resource-limited clients to participate effectively. [ABSTRACT FROM AUTHOR]
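The abstract's multi-teacher distillation step can be illustrated with a minimal sketch. This is not the authors' actual loss (the paper's adaptive weighting scheme is not specified here); it assumes a common heuristic in which each teacher's weight is proportional to its prediction confidence, and all function names (`multi_teacher_distill_loss`, etc.) are hypothetical:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two probability vectors."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def multi_teacher_distill_loss(student_logits, teacher_logits_list, T=2.0):
    """Confidence-weighted multi-teacher distillation loss (illustrative).

    Each teacher's weight is its max softmax probability, normalized over
    teachers, so more confident teachers contribute more to the loss.
    """
    student_probs = softmax(student_logits, T)
    teacher_probs = [softmax(t, T) for t in teacher_logits_list]
    confidence = np.array([p.max() for p in teacher_probs])
    weights = confidence / confidence.sum()  # adaptive teacher weights
    return sum(w * kl_divergence(p, student_probs)
               for w, p in zip(weights, teacher_probs))
```

In A-JOPQ's setting, the two teachers would be the pruning network itself and the original uncompressed model, so the partially quantized student is pulled toward both.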
Database:
Complementary Index