Increasing momentum-like factors: A method for reducing training errors on multiple GPUs

Author: Zhiquan Lai, Zhaoning Zhang, Zhigang Kan, Dongsheng Li, Lujia Yin, Yu Tang, Linbo Qiao
Year of publication: 2022
Source: Tsinghua Science and Technology. 27: 114-126
ISSN: 1007-0214
Description: In distributed training, increasing the batch size can improve parallelism, but it can also introduce difficulties into the training process and cause training errors. In this work, we investigate the occurrence of training errors theoretically and train ResNet-50 on CIFAR-10 using Stochastic Gradient Descent (SGD) and Adaptive moment estimation (Adam), keeping the total batch size in the parameter server constant while lowering the batch size on each Graphics Processing Unit (GPU). We propose a new method that leverages momentum to eliminate training errors in distributed training. We define a Momentum-like Factor (MF) to represent the influence of former gradients on parameter updates in each iteration. We then modify the MF values and conduct experiments to explore how different MF values influence training performance under SGD, Adam, and Nesterov accelerated gradient. Experimental results reveal that increasing MFs is a reliable method for reducing training errors in distributed training. The paper also presents an analysis of the convergence conditions of distributed training with a large batch size on multiple GPUs.
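As a rough illustration only (the abstract does not give the paper's exact MF definition), the sketch below assumes the MF corresponds to the momentum coefficient of SGD/Nesterov and to the first-moment coefficient beta1 of Adam; the value 0.95 is a hypothetical "increased" MF relative to the common default of 0.9.

```python
# Minimal PyTorch sketch: raising a momentum-like factor (MF) for the three
# optimizers named in the abstract. The mapping of MF to `momentum` (SGD,
# Nesterov) and to `beta1` (Adam) is an assumption for illustration, not the
# paper's method.
import torch
import torchvision

# ResNet-50 configured for CIFAR-10's 10 classes, as in the experiments.
model = torchvision.models.resnet50(num_classes=10)

mf = 0.95  # hypothetical increased MF (0.9 is a common default)

sgd = torch.optim.SGD(model.parameters(), lr=0.1, momentum=mf)
nag = torch.optim.SGD(model.parameters(), lr=0.1, momentum=mf, nesterov=True)
adam = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(mf, 0.999))
```

Under this reading, a larger MF gives former gradients more weight in each parameter update, which is the knob the experiments vary while the per-GPU batch size shrinks.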
Database: OpenAIRE