PaCKD: Pattern-Clustered Knowledge Distillation for Compressing Memory Access Prediction Models

Authors: Gupta, Neelesh; Zhang, Pengmiao; Kannan, Rajgopal; Prasanna, Viktor
Publication year: 2024
Subject:
Source: 2023 IEEE High Performance Extreme Computing Conference (HPEC), 2023, pp. 1-7
Document type: Working Paper
DOI: 10.1109/HPEC58863.2023.10363610
Description: Deep neural networks (DNNs) have proven effective for accurate Memory Access Prediction (MAP), a critical task in mitigating memory latency through data prefetching. However, existing DNN-based MAP models suffer from challenges such as large storage footprints and high inference latency, primarily due to their large number of parameters, which render them impractical for real-world deployment. In this paper, we propose PaCKD, a Pattern-Clustered Knowledge Distillation approach that compresses MAP models while maintaining prediction performance. PaCKD comprises three steps: clustering memory access sequences into distinct partitions with similar patterns, training a large pattern-specific teacher model for memory access prediction on each partition, and training a single lightweight student model by distilling knowledge from the trained pattern-specific teachers. We evaluate our approach on LSTM, MLP-Mixer, and ResNet models, chosen for their diverse structures and wide use in image classification, and test their effectiveness on four widely used graph applications. Compared to teacher models with 5.406M parameters and an F1-score of 0.4626, our student models achieve a 552$\times$ model size compression while maintaining an F1-score of 0.4538 (a 1.92% performance drop). Our approach yields an F1-score 8.70% higher than student models trained with standard knowledge distillation and 8.88% higher than student models trained without any form of knowledge distillation.
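The three-step recipe in the abstract can be illustrated with a minimal sketch of the distillation loss. This is a hypothetical, pure-Python illustration (not the authors' implementation): it assumes standard Hinton-style distillation, where the student's loss combines hard-label cross-entropy with a temperature-softened KL term against a teacher, and where each training sample is routed to the pattern-specific teacher of its cluster. The function names, temperature `T`, and mixing weight `alpha` are illustrative assumptions.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens the distribution.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, true_label, T=2.0, alpha=0.5):
    # Hard-label cross-entropy term on the student's prediction.
    p_student = softmax(student_logits)
    ce = -math.log(p_student[true_label])
    # Soft-target term: KL divergence between the softened teacher
    # and student distributions.
    q_teacher = softmax(teacher_logits, T)
    q_student = softmax(student_logits, T)
    kl = sum(qt * math.log(qt / qs) for qt, qs in zip(q_teacher, q_student))
    # T^2 compensates for the 1/T^2 scaling of soft-target gradients.
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

def packd_loss(student_logits, teachers_logits, cluster_id, true_label):
    # Pattern-clustered routing (hypothetical): each sample is distilled
    # from the teacher trained on its memory-access-pattern cluster.
    return kd_loss(student_logits, teachers_logits[cluster_id], true_label)
```

When the student already matches its cluster's teacher, the KL term vanishes and only the hard-label loss remains, so a mismatched student incurs a strictly larger loss.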
Comment: 6 pages, 2 figures, HPEC '23
Database: arXiv