Author:
Johnsirani Venkatesan, Nikitha; Nam, ChoonSung; Shin, Dong Ryeol
Subject:
Source:
IETE Technical Review; Mar 2019, Vol. 36, Issue 2, p164-177, 14p
Abstract:
Deep learning is undergoing intense study as it continues to attain outstanding results thanks to its multiple levels of abstraction. Currently, deep learning algorithms are trained and executed on a single machine with multiple Graphics Processing Units (GPUs). To achieve better results, a large amount of data needs to be used to train deep neural networks on expensive GPUs and General Purpose GPUs (GPGPUs). Training complex deep learning models takes weeks and sometimes even months. To converge faster without compromising accuracy, the distributed and parallel nature of Apache Spark comes into play. Spark has the advantage of in-memory, fast data processing. Recent research has integrated deep learning with Apache Spark to exploit its computational power and scalability. In this paper, all the recent deep learning frameworks are reviewed exhaustively with a detailed assessment and comparison. Experimental results are provided to evaluate which frameworks are more suitable for deep models. We also discuss the open issues related to each deep learning framework. [ABSTRACT FROM AUTHOR]
Database:
Complementary Index |
External link: