Sparse GPU Kernels for Deep Learning
Author: Trevor Gale, Matei Zaharia, Cliff Young, Erich Elsen
Year: 2020
Subject: Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Computer Science - Distributed, Parallel, and Cluster Computing (cs.DC); numerical analysis; parallel computing; sparse matrix; kernel (linear algebra); CUDA; artificial neural network; deep learning; matrix multiplication
Source: SC
DOI: 10.1109/sc41405.2020.00021
Description: Scientific workloads have traditionally exploited high levels of sparsity to accelerate computation and reduce memory requirements. While deep neural networks can be made sparse, achieving practical speedups on GPUs is difficult because these applications have relatively moderate levels of sparsity that are not sufficient for existing sparse kernels to outperform their dense counterparts. In this work, we study sparse matrices from deep learning applications and identify favorable properties that can be exploited to accelerate computation. Based on these insights, we develop high-performance GPU kernels for two sparse matrix operations widely applicable in neural networks: sparse matrix-dense matrix multiplication and sampled dense-dense matrix multiplication. Our kernels reach 27% of single-precision peak on Nvidia V100 GPUs. Using our kernels, we demonstrate sparse Transformer and MobileNet models that achieve 1.2-2.1x speedups and up to 12.8x memory savings without sacrificing accuracy.
Note: Updated to match the camera-ready version for SC20.
Database: OpenAIRE
External link:
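The two kernels named in the description can be sketched in pure Python as a reference for their semantics; the function names, the CSR encoding, and the argument layout here are illustrative assumptions, not the paper's actual GPU API:

```python
# Reference semantics (not GPU code) for the two operations the paper
# accelerates, with the sparse operand given in CSR form:
#   indptr  - row pointer array, length rows + 1
#   indices - column index of each nonzero
#   values  - value of each nonzero

def spmm(indptr, indices, values, B):
    """Sparse matrix-dense matrix multiplication: dense result of A @ B,
    where A is the CSR-encoded sparse matrix."""
    m = len(indptr) - 1          # rows of A
    k = len(B[0])                # columns of B
    out = [[0.0] * k for _ in range(m)]
    for i in range(m):
        for p in range(indptr[i], indptr[i + 1]):
            j, v = indices[p], values[p]
            for c in range(k):
                out[i][c] += v * B[j][c]
    return out

def sddmm(indptr, indices, A, B):
    """Sampled dense-dense matrix multiplication: computes A @ B^T, but
    only at the positions present in the CSR sparsity pattern, returning
    the resulting nonzero values in CSR order."""
    values = []
    for i in range(len(indptr) - 1):
        for p in range(indptr[i], indptr[i + 1]):
            j = indices[p]
            values.append(sum(a * b for a, b in zip(A[i], B[j])))
    return values
```

SDDMM is the natural companion to SpMM in sparse attention and in training sparse networks: it produces a sparse output restricted to a given pattern, which can then feed an SpMM.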