Showing 1 - 10 of 625 for search: '"Yuan Jinhui"'
Author:
Zhang Yifan, Zhao Yifei, Fang Ziwei, Liu Jiantao, Xia Changming, Hou Zhiyun, Zhao Xuesong, Tan Zhongwei, Dong Yi, Zhou Guiyao, Yuan Jinhui
Published in:
Nanophotonics, Vol 13, Iss 6, Pp 891-899 (2024)
The multicore fiber amplifier, as a key component in spatial division multiplexing (SDM) communication systems, presents higher technical difficulty compared to traditional multi-channel single-core fiber amplifiers, which has sparked widespread attention…
External link:
https://doaj.org/article/79de6230e4a149288655e26f03a9e62a
Recently, heterogeneous graph neural networks (HGNNs) have achieved impressive success in representation learning by capturing long-range dependencies and heterogeneity at the node level. However, few existing studies have delved into the utilization…
External link:
http://arxiv.org/abs/2404.10443
Tensor rematerialization allows the training of deep neural networks (DNNs) under limited memory budgets by checkpointing the models and recomputing the evicted tensors as needed. However, the existing tensor rematerialization techniques overlook the…
External link:
http://arxiv.org/abs/2311.00591
Published in:
Nanophotonics, Vol 5, Iss 2, Pp 292-315 (2016)
Frequency comb sources have revolutionized metrology and spectroscopy and found applications in many fields. Stable, low-cost, high-quality frequency comb sources are important to these applications. Modeling of the frequency comb sources will help t…
External link:
https://doaj.org/article/0b88689ce32e4643b993e0f85113858d
Various distributed deep neural network (DNN) training technologies lead to increasingly complicated use of collective communications on GPU. The deadlock-prone collectives on GPU force researchers to guarantee that collectives are enqueued in a consistent…
External link:
http://arxiv.org/abs/2303.06324
Published in:
IEEE Transactions on Parallel and Distributed Systems (Volume 35, Issue 9, September 2024)
As the size of deep learning models gets larger and larger, training takes longer and requires more resources, making fault tolerance more and more critical. Existing state-of-the-art methods like CheckFreq and Elastic Horovod need to back up a copy of t…
External link:
http://arxiv.org/abs/2302.06173
Recent advances in deep learning are driven by the growing scale of computation, data, and models. However, efficiently training large-scale models on distributed systems requires an intricate combination of data, operator, and pipeline parallelism…
External link:
http://arxiv.org/abs/2301.06813
Author:
Li, Zefeng, Yuan, Jinhui, Rao, Lan, Yan, Binbin, Wang, Kuiru, Sang, Xinzhu, Wu, Qiang, Yu, Chongxiu
Published in:
In Photonics and Nanostructures - Fundamentals and Applications, September 2024, Vol. 61
Author:
Huo, Jingyu, Zeng, Zirong, Yuan, Jinhui, Luo, Minghuo, Luo, Aiping, Li, Jiaming, Yang, Huan, Zhao, Nan, Zhang, Qingmao
Published in:
In Optics and Laser Technology, February 2025, Vol. 181, Part B
Author:
Li, Zefeng, Qu, Yuwei, Wang, Xin, Wang, Yuheng, Wang, Danian, Guo, Xiaoyue, Xin, Changhua, Qiu, Qian, Rao, Lan, Yuan, Jinhui
Published in:
In Measurement, January 2025, Vol. 242, Part E