HyPar: Towards Hybrid Parallelism for Deep Learning Accelerator Array
Author: | Jiachen Mao, Yiran Chen, Hai Li, Xuehai Qian, Youwei Zhuo, Linghao Song |
---|---|
Language: | English |
Publication year: | 2019 |
Subject: |
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Distributed, Parallel and Cluster Computing (cs.DC); Computer Science::Neural and Evolutionary Computation; artificial neural network; deep learning; dataflow; dynamic programming; kernel (linear algebra); parallelism; hardware acceleration; throughput; computer engineering; artificial intelligence; 01 natural sciences; 0103 physical sciences; 010302 applied physics; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 020202 computer hardware & architecture |
Source: | HPCA |
Description: | With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely used in many domains. To achieve high performance and energy efficiency, hardware acceleration of DNNs (especially inference) is intensively studied in both academia and industry. However, two challenges remain: large DNN models and datasets incur frequent off-chip memory accesses, and the training of DNNs is not well explored in recent accelerator designs. To provide high-throughput and energy-efficient acceleration for training deep and large models, we inevitably need multiple accelerators to exploit coarse-grain parallelism, in contrast to the fine-grain parallelism inside a layer considered in most existing architectures. This poses the key research question of how best to organize computation and dataflow among the accelerators. In this paper, inspired by recent work in machine learning systems, we propose HyPar, a solution that determines layer-wise parallelism for deep neural network training with an array of DNN accelerators. HyPar partitions the feature map tensors (input and output), the kernel tensors, the gradient tensors, and the error tensors across the DNN accelerators. A partition constitutes the choice of parallelism for each weighted layer. The optimization target is to find a partition that minimizes the total communication during the training of a complete DNN. To solve this problem, we propose a communication model that explains the source and amount of communication, and then use a hierarchical layer-wise dynamic programming method to search for the partition of each layer. To appear in the 2019 25th International Symposium on High-Performance Computer Architecture (HPCA 2019). (A minimal illustrative sketch of the layer-wise dynamic-programming search appears after this record.) |
Database: | OpenAIRE |
External link: |
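The description above outlines HyPar's core idea: for each weighted layer, pick a partition (a choice of parallelism) so that the total communication over the whole network is minimized, found with a layer-wise dynamic programming search. The sketch below is a minimal, hypothetical illustration of that kind of search, not the authors' implementation: it assumes only two partition choices per layer (data parallelism vs. model parallelism) and uses placeholder cost functions `intra_cost` and `inter_cost` in place of the paper's communication model.

```python
# Minimal sketch (not the HyPar code): layer-wise dynamic programming that
# picks, for each weighted layer, one of two partition choices -- data
# parallelism ("DP") or model parallelism ("MP") -- minimizing the sum of
# intra-layer and inter-layer communication. Cost functions are placeholders.

from typing import Callable, Dict, List, Tuple

CHOICES = ("DP", "MP")  # partition choice per weighted layer


def optimize_partitions(
    num_layers: int,
    intra_cost: Callable[[int, str], float],       # comm. inside layer i under a choice
    inter_cost: Callable[[int, str, str], float],  # comm. between layers i and i+1
) -> Tuple[float, List[str]]:
    """Return (minimal total communication, per-layer partition choices)."""
    # best[c] = minimal cost of layers 0..i with layer i using choice c
    best: Dict[str, float] = {c: intra_cost(0, c) for c in CHOICES}
    trace: List[Dict[str, str]] = []

    for i in range(1, num_layers):
        new_best: Dict[str, float] = {}
        step: Dict[str, str] = {}
        for c in CHOICES:
            # extend every previous choice p, paying the transition cost
            cand = {p: best[p] + inter_cost(i - 1, p, c) for p in CHOICES}
            p_star = min(cand, key=cand.get)
            new_best[c] = cand[p_star] + intra_cost(i, c)
            step[c] = p_star
        best = new_best
        trace.append(step)

    # backtrack the optimal sequence of choices
    last = min(best, key=best.get)
    choices = [last]
    for step in reversed(trace):
        choices.append(step[choices[-1]])
    choices.reverse()
    return best[last], choices


# Example with made-up costs: model parallelism is assumed cheaper for large
# fully-connected layers, data parallelism for convolutional layers, and
# switching layouts between adjacent layers costs extra traffic.
if __name__ == "__main__":
    layer_kinds = ["conv", "conv", "fc", "fc"]
    intra = lambda i, c: 1.0 if (layer_kinds[i] == "conv") == (c == "DP") else 4.0
    inter = lambda i, p, c: 0.0 if p == c else 2.0
    total, plan = optimize_partitions(len(layer_kinds), intra, inter)
    print(total, plan)  # -> 6.0 ['DP', 'DP', 'MP', 'MP']
```

The point of the sketch is only the recurrence: the best cost for layer i under a given choice extends the best cost of layer i-1 under whichever previous choice minimizes the transition communication. The paper itself applies its layer-wise dynamic programming hierarchically over an array of accelerators and derives the costs from an explicit communication model.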