A Distributed Computing Framework Based on Variance Reduction Method to Accelerate Training Machine Learning Models
Author: Yuan Yuan, Jinyan Qiu, Yuxing Peng, Feng Liu, Hangjun Zhou, Dongsheng Li, Changjian Wang, Mingxing Tang, Zhen Huang
Year: 2020
Subject: Machine learning; Variance reduction; Stochastic process; Approximation algorithm; Solver; Symmetric matrix; Artificial intelligence; Process (computing); Scheme (programming language); Computer science
Source: 2020 IEEE International Conference on Joint Cloud Computing
DOI: 10.1109/jcc49151.2020.00014
Description: To support large-scale intelligent applications, distributed machine learning based on JointCloud is an intuitive solution. However, distributed machine learning models are difficult to train because the corresponding optimization solvers converge slowly and place heavy demands on computing and memory resources. To overcome these challenges, we propose a computing framework for the L-BFGS optimization algorithm based on the variance reduction method, which can use a fixed large learning rate to achieve linear convergence. To validate our claims, we conducted several experiments on multiple classical datasets. Experimental results show that the computing framework accelerates the training process of the solver and obtains accurate results for machine learning algorithms.
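The abstract describes combining a variance reduction method (in the style of SVRG) with the L-BFGS solver so that a fixed large learning rate suffices for fast convergence. The sketch below illustrates that general idea on l2-regularized logistic regression; it is not the paper's implementation, and the synthetic dataset, step size, batch size, memory size, and curvature-pair construction are all illustrative assumptions.

```python
# A minimal sketch (assumed, not the authors' code) of an SVRG-style
# variance-reduced stochastic L-BFGS loop with a fixed step size.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 1000, 20, 1e-3
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.where(X @ w_true + 0.1 * rng.standard_normal(n) > 0, 1.0, -1.0)

def grad(w, idx):
    """Gradient of the l2-regularized logistic loss on rows `idx`."""
    Xi, yi = X[idx], y[idx]
    p = -yi / (1.0 + np.exp(yi * (Xi @ w)))
    return Xi.T @ p / len(idx) + lam * w

def two_loop(g, S, Y):
    """Standard L-BFGS two-loop recursion: apply the inverse-Hessian estimate to g."""
    q, alphas = g.copy(), []
    for s, yv in zip(reversed(S), reversed(Y)):   # newest pair first
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q -= a * yv
    if S:  # initial Hessian scaling from the most recent pair
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    for (s, yv), a in zip(zip(S, Y), reversed(alphas)):  # oldest pair first
        q += (a - (yv @ q) / (yv @ s)) * s
    return q

w = np.zeros(d)
eta, m, batch, mem = 0.5, 50, 32, 10   # fixed step size, per the abstract's claim
S, Y = [], []
for epoch in range(30):
    w_snap = w.copy()
    mu = grad(w_snap, np.arange(n))              # full gradient at the snapshot
    for _ in range(m):
        idx = rng.choice(n, batch, replace=False)
        g = grad(w, idx) - grad(w_snap, idx) + mu     # variance-reduced gradient
        w_new = w - eta * two_loop(g, S, Y)
        # Curvature pair from variance-reduced gradients on the same minibatch;
        # keep it only if the curvature condition holds, so the estimate stays PD.
        s_vec = w_new - w
        y_vec = (grad(w_new, idx) - grad(w_snap, idx) + mu) - g
        if s_vec @ y_vec > 1e-10:
            S.append(s_vec); Y.append(y_vec)
            if len(S) > mem:
                S.pop(0); Y.pop(0)
        w = w_new
print("final full-gradient norm:", np.linalg.norm(grad(w, np.arange(n))))
```

The design point the abstract hinges on is visible here: because the variance-reduced gradient `g` shrinks toward zero near the optimum, the fixed step `eta` never has to be decayed, which is what allows the linear convergence rate claimed for the framework.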
Database: OpenAIRE
External link: