Showing 1 - 10 of 170 for query: '"Liu, Jinlan"'
Adam is a commonly used stochastic optimization algorithm in machine learning. However, its convergence is still not fully understood, especially in the non-convex setting. This paper focuses on exploring hyperparameter settings for the convergence of …
External link:
http://arxiv.org/abs/2307.11782
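For context, the snippet above refers to the standard Adam recursion; a minimal NumPy sketch is given below. The hyperparameter names and defaults (lr, beta1, beta2, eps) are the usual conventions, not the specific settings analysed in the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for parameters theta given a stochastic gradient grad (t >= 1)."""
    # Exponential moving averages of the gradient and its elementwise square.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for the zero initialisation of m and v.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Adaptive step: each coordinate is scaled by its own gradient history.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```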
Published in:
Neural Computation (2024) 36 (9): 1912-1938
Adam-type algorithms have become a preferred choice for optimisation in the deep learning setting; however, despite their success, their convergence is still not well understood. To this end, we introduce a unified framework for Adam-type algorithms (called …
External link:
http://arxiv.org/abs/2305.05675
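The snippet is cut off before naming the framework, but Adam-type methods broadly share one update template: a first-moment estimate divided by the square root of some second-moment estimate. The sketch below illustrates that family on its two best-known members, Adam and AMSGrad; it is an illustration of the class, not the paper's own unified framework, and it omits bias correction for brevity.

```python
import numpy as np

def adam_type_step(theta, grad, m, v, v_max, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8, amsgrad=False):
    """Generic Adam-type update; amsgrad=False uses Adam's second moment,
    amsgrad=True uses AMSGrad's non-decreasing second moment."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    v_max = np.maximum(v_max, v)              # running maximum, used by AMSGrad
    v_eff = v_max if amsgrad else v
    theta = theta - lr * m / (np.sqrt(v_eff) + eps)
    return theta, m, v, v_max
```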
Published in:
European Journal of Innovation Management, 2023, Vol. 27, Issue 6, pp. 1864-1884.
External link:
http://www.emeraldinsight.com/doi/10.1108/EJIM-08-2022-0417
Published in:
Neurocomputing 527 (2023) 27-35
The stochastic momentum method is a commonly used acceleration technique for solving large-scale stochastic optimization problems in artificial neural networks. Current convergence results for stochastic momentum methods under non-convex stochastic settings …
External link:
http://arxiv.org/abs/2205.14811
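As a reminder of the method the abstract refers to, below is a minimal stochastic heavy-ball (momentum SGD) step; the learning rate, momentum value, and toy objective are illustrative choices, not assumptions from the paper.

```python
import numpy as np

def heavy_ball_step(theta, grad, velocity, lr=0.01, momentum=0.9):
    # The velocity accumulates an exponentially weighted sum of past gradients.
    velocity = momentum * velocity - lr * grad
    # Parameters move along the accumulated direction, not just the raw gradient.
    return theta + velocity, velocity

# Toy usage on the quadratic f(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta, velocity = np.ones(3), np.zeros(3)
for _ in range(100):
    theta, velocity = heavy_ball_step(theta, grad=theta, velocity=velocity)
```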
Authors:
Li, Mubai; Zhang, Zhenxin; Chen, Siyun; Zhang, Liqiang; Xu, Zhihua; Ren, Xiaoxu; Liu, Jinlan; Sun, Peng
Published in:
In International Journal of Applied Earth Observation and Geoinformation, August 2024, 132
Authors:
Mao, Yanqi; Yang, Hu; Wang, Guanbo; Xu, Yuncun; Liu, Jinlan; Li, Wenqiong; Li, Qingyu; He, Yun; Yip, SenPo; Liang, Xiaoguang
Published in:
In Chemical Engineering Journal, 1 August 2024, 493
Published in:
In Neural Networks, November 2024, 179
Authors:
Liu, Jinlan; Si, Yongfeng; Huang, Xiaoying; Lin, Xinran; Lu, Lingjuan; Wu, Changlin; Guan, Xuan; Liang, Yunsheng
Published in:
In Neuroscience Letters, 27 July 2024, 836
Plain stochastic gradient descent and momentum stochastic gradient descent are widely used in deep learning due to their simple settings and low computational complexity. Momentum stochastic gradient descent uses the accumulated …
External link:
http://arxiv.org/abs/2106.06753
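The contrast drawn in the snippet can be made concrete with a short sketch: plain SGD steps against the current stochastic gradient only, while momentum SGD steps along a buffer of accumulated past gradients. The parameter names (lr, mu) are illustrative and not taken from the paper.

```python
def sgd_step(theta, grad, lr=0.01):
    # Plain SGD: step against the current stochastic gradient only.
    return theta - lr * grad

def momentum_sgd_step(theta, grad, buf, lr=0.01, mu=0.9):
    # Momentum SGD: buf accumulates past gradients and is used as the
    # search direction, which typically smooths and accelerates training.
    buf = mu * buf + grad
    return theta - lr * buf, buf
```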
The adaptive gradient algorithm (AdaGrad) and its variants, such as RMSProp, Adam, AMSGrad, etc., have been widely used in deep learning. Although these algorithms are faster in the early phase of training, their generalization performance is often not as …
External link:
http://arxiv.org/abs/2106.06749
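For reference, the basic AdaGrad recursion that the variants above build on is sketched below; RMSProp and Adam replace the cumulative sum by an exponential moving average. The lr and eps values are conventional defaults, not the settings studied in the paper.

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.01, eps=1e-8):
    # Accumulate the per-coordinate sum of squared gradients over all steps.
    accum = accum + grad ** 2
    # Coordinates with a large gradient history take proportionally smaller steps.
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum
```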