DGEMM on Integer Matrix Multiplication Unit

Authors: Hiroyuki Ootomo, Katsuhisa Ozaki, Rio Yokota
Year of publication: 2023
Document type: Working Paper
Description: Deep learning hardware achieves high throughput and low power consumption by reducing computing precision and specializing in matrix multiplication. For machine learning inference, fixed-point computation is commonplace: the input values, output values, and model parameters are quantized. As a result, many processors are now equipped with fast integer matrix multiplication units (IMMUs). It is of significant interest to find a way to harness these IMMUs to improve the performance of HPC applications while maintaining accuracy. We focus on the Ozaki scheme, which computes a high-precision matrix multiplication using lower-precision computing units, and show the advantages and disadvantages of using IMMUs. Our experiments on integer Tensor Cores show that we can compute double-precision matrix multiplication faster than cuBLAS and an existing Ozaki scheme implementation on FP16 Tensor Cores on NVIDIA consumer GPUs. Furthermore, we demonstrate accelerating a quantum circuit simulation by up to 4.33× while maintaining FP64 accuracy.
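To make the core idea concrete, here is a minimal Python sketch of an Ozaki-style splitting: each FP64 matrix is decomposed into a sum of scaled integer slices, the slice-by-slice products are computed exactly in integer arithmetic (NumPy's int64 matmul stands in for a hardware IMMU), and the scaled partial products are accumulated. The slice count, per-slice bit width, and per-row scaling below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def split_int_slices(A, num_slices=8, bits=7):
    """Split FP64 matrix A into integer slices with per-row power-of-two
    scales, so that A ~= sum_i slices[i] * 2.0**shifts[i] elementwise.
    (Simplified illustration of Ozaki-scheme splitting, not the paper's code.)"""
    amax = np.max(np.abs(A), axis=1, keepdims=True)
    amax[amax == 0] = 1.0
    e = np.ceil(np.log2(amax))            # per-row exponent bound
    R = A.copy()
    slices, shifts = [], []
    for i in range(num_slices):
        shift = e - bits * (i + 1)        # exponent of this slice's unit
        S = np.floor(R / 2.0**shift)      # small-range integer slice
        slices.append(S.astype(np.int64))
        shifts.append(shift)
        R = R - S * 2.0**shift            # carry remainder to next slice
    return slices, shifts

def ozaki_int_matmul(A, B, num_slices=8, bits=7):
    """High-precision GEMM from exact integer slice products."""
    As, sa = split_int_slices(A, num_slices, bits)      # row-wise split of A
    Bs, sb = split_int_slices(B.T, num_slices, bits)    # column-wise split of B
    C = np.zeros((A.shape[0], B.shape[1]))
    for i in range(num_slices):
        for j in range(num_slices):
            P = As[i] @ Bs[j].T           # exact integer GEMM (the IMMU step)
            C += P * 2.0**(sa[i] + sb[j].T)
    return C
```

With 7-bit slices the integer products and their accumulation over the inner dimension stay far below the int64 range, so each slice-pair GEMM is exact; on real hardware this is where int8 inputs with int32 accumulation would be used.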
Comment: Accepted by IJHPCA: https://journals.sagepub.com/doi/10.1177/10943420241239588
Database: arXiv