Showing 1 - 10 of 40 for search: '"Shaofeng Zhao"'
Author:
Yuanzhe Dong, Xi Tang, Qingge Li, Yingying Wang, Naifu Jiang, Lan Tian, Yue Zheng, Xiangxin Li, Shaofeng Zhao, Guanglin Li, Peng Fang
Published in:
IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol 31, Pp 3524-3534 (2023)
Electroencephalogram (EEG) recordings often contain artifacts that lower signal quality. Many efforts have been made to eliminate or at least minimize these artifacts, and most of them rely on visual inspection and manual operations, which is time-consuming…
External link:
https://doaj.org/article/37b2973c9af14b4ca635a7a77e46fe1f
Published in:
Frontiers in Neuroscience, Vol 16 (2022)
Virtual reality has demonstrated its analgesic effectiveness. However, its optimal interactive mode for pain relief is still unclear, and few objective measurements have been performed to explore its neural mechanism. Objective: This study primarily aims…
External link:
https://doaj.org/article/41423123e6374d509a0a27fef96f62c3
Published in:
IEEE Access, Vol 9, Pp 57075-57088 (2021)
Large-scale distributed deep learning is of great importance in various applications. For data-parallel distributed training systems, limited hardware resources (e.g., GPU memory and interconnection bandwidth) often become a performance bottleneck…
External link:
https://doaj.org/article/122da6eca8484cccb868d254fe7074c6
Published in:
International Journal of Internet Manufacturing and Services. 10:1
Author:
Shangjun Lu, Xiaoxia Du, Juan Liu, Yu-Mei Zhang, Shaofeng Zhao, Rongfeng Su, Lan Wang, Nan Yan
Published in:
2022 International Conference on Asian Language Processing (IALP).
Published in:
Biomedical Signal Processing and Control. 79:103983
Published in:
International Journal of Environmental Technology and Management. 1:1
Published in:
Green, Pervasive, and Cloud Computing ISBN: 9783030642426
GPC
Large-scale distributed deep learning is of great importance in various applications. For distributed training, the inter-node gradient communication often becomes the performance bottleneck. Gradient sparsification has been proposed to reduce the communication…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::0846f73ccf98471a43182a2c1569db2c
https://doi.org/10.1007/978-3-030-64243-3_6
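The gradient sparsification technique named in the abstract above can be illustrated with a minimal top-k sketch (an assumption for illustration; the paper's actual scheme may differ), where each node transmits only its k largest-magnitude gradient entries:

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int) -> np.ndarray:
    """Zero out all but the k largest-magnitude entries of a gradient tensor.

    Illustrative sketch of top-k gradient sparsification, not the
    paper's implementation.
    """
    flat = grad.ravel()
    # indices of the k entries with the largest absolute value
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)
```

Only the kept values and their indices need to cross the network, cutting per-step gradient traffic roughly by a factor of n/k; practical systems usually also accumulate the dropped residuals locally so that convergence is preserved.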
Published in:
ACM Transactions on Architecture and Code Optimization. 15:1-26
Due to the popularity of Deep Neural Network (DNN) models, we have witnessed extreme-scale DNN models with a continued increase in scale in terms of depth and width. However, their extremely high memory requirements make it difficult to…