Showing 1 - 10 of 210
for search: '"Ding, Siyu"'
High-fidelity simulations of mixing and combustion processes are generally computationally demanding and time-consuming, hindering their wide application in industrial design and optimization. The present study proposes a parametric reduced-order model…
External link:
http://arxiv.org/abs/2308.14566
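The snippet above mentions a parametric reduced-order model for expensive mixing and combustion simulations. As background only, here is a minimal sketch of proper orthogonal decomposition (POD), a common way to build such a reduced basis from simulation snapshots; the function names and the 99% energy threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def build_pod_basis(snapshots, energy=0.99):
    """Build a POD basis from a snapshot matrix (n_dof x n_snapshots).

    Keeps the smallest number of modes capturing `energy` of the total
    variance. Generic sketch, not the model of the paper above.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return mean, U[:, :r]

def project(field, mean, basis):
    """Reduce a full-order field to modal coefficients."""
    return basis.T @ (field - mean.ravel())

def reconstruct(coeffs, mean, basis):
    """Recover an approximate full-order field from coefficients."""
    return mean.ravel() + basis @ coeffs

# Toy usage: 1000-DOF fields, 50 snapshots
snaps = np.random.rand(1000, 50)
mean, basis = build_pod_basis(snaps)
coeffs = project(snaps[:, 0], mean, basis)
approx = reconstruct(coeffs, mean, basis)
```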
In advanced aero-propulsion engines, kerosene is often injected into the combustor at supercritical pressures, where the flow dynamics is distinct from its subcritical counterpart. Large-eddy simulation combined with real-fluid thermodynamics and transport…
External link:
http://arxiv.org/abs/2306.17106
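The abstract pairs large-eddy simulation with real-fluid thermodynamics at supercritical pressure. A standard ingredient in such studies is a cubic equation of state; below is a minimal sketch of the Peng-Robinson form, with approximate critical properties for n-dodecane (a common kerosene surrogate) as assumed inputs. It illustrates the general idea, not the paper's exact thermodynamic model.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol*K)

def peng_robinson_pressure(T, v, Tc, Pc, omega):
    """Pressure from the Peng-Robinson cubic equation of state.

    T in K, molar volume v in m^3/mol. A widely used real-fluid EOS
    in supercritical-fluid simulations; generic illustration only.
    """
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc))) ** 2
    return R * T / (v - b) - a * alpha / (v**2 + 2.0 * b * v - b**2)

# Approximate critical properties of n-dodecane (illustrative values):
Tc, Pc, omega = 658.1, 1.817e6, 0.576
print(peng_robinson_pressure(700.0, 5.0e-4, Tc, Pc, omega))  # pressure in Pa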
Published in:
Shuitu Baochi Xuebao, Vol 38, Iss 3, Pp 345-355 (2024)
[Objective] The study aimed to explore the effects of aeration on crop growth characteristics and soil environmental characteristics in the root zone under reduced topdressing conditions. [Methods] The present paper took the field-cultivated tomato in…
External link:
https://doaj.org/article/d1d7b1a9abb54da09c5bafde180b1b3d
Author:
RONG Chunyu, HONG Dongni, WANG Baoyue, WANG Junwei, WANG Yunmeng, LI Xianglong, DING Siyu, ZHOU Ping
Published in:
Shanghai yufang yixue, Vol 36, Iss 5, Pp 504-510 (2024)
With the development of digital technology, an increasing number of artificial intelligence (AI) technologies are being applied in the field of public health, significantly improving the efficiency of healthcare systems. However, such technologies…
External link:
https://doaj.org/article/f6838398d7424caeb2cbc1c266d03023
Author:
Xiang, Yang, Wu, Zhihua, Gong, Weibao, Ding, Siyu, Mo, Xianjie, Liu, Yuang, Wang, Shuohuan, Liu, Peng, Hou, Yongshuai, Li, Long, Wang, Bin, Shi, Shaohuai, Han, Yaqian, Yu, Yue, Li, Ge, Sun, Yu, Ma, Yanjun, Yu, Dianhai
The ever-growing model size and scale of compute have attracted increasing interest in training deep learning models over multiple nodes. However, when it comes to training on cloud clusters, especially across remote clusters, huge challenges are faced…
External link:
http://arxiv.org/abs/2205.09470
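The snippet describes training deep learning models over multiple nodes and clusters. As a generic illustration of multi-node data parallelism (not the framework proposed in the paper, which targets remote cloud clusters), here is a minimal PyTorch DistributedDataParallel sketch; the toy model and hyperparameters are assumptions.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Assumes the environment variables set by a standard launcher.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for _ in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()   # gradients are all-reduced across processes here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, e.g., `torchrun --nnodes=2 --nproc_per_node=8 train.py`, which provides LOCAL_RANK and the rendezvous variables.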
Author:
Chen, Xu, Li, Danqing, Su, Qi, Ling, Xing, Yang, Yanyan, Liu, Yuhang, Zhu, Xinjie, He, Anqi, Ding, Siyu, Xu, Runxiao, Liu, Zhaoxia, Long, Xiaojun, Zhang, Jinping, Yang, Zhihui, Qi, Yitao, Wu, Hongmei
Published in:
In Journal of Biological Chemistry October 2024 300(10)
Author:
Wang, Shuohuan, Sun, Yu, Xiang, Yang, Wu, Zhihua, Ding, Siyu, Gong, Weibao, Feng, Shikun, Shang, Junyuan, Zhao, Yanbin, Pang, Chao, Liu, Jiaxiang, Chen, Xuyi, Lu, Yuxiang, Liu, Weixin, Wang, Xi, Bai, Yangfan, Chen, Qiuliang, Zhao, Li, Li, Shiyong, Sun, Peng, Yu, Dianhai, Ma, Yanjun, Tian, Hao, Wu, Hua, Wu, Tian, Zeng, Wei, Li, Ge, Gao, Wen, Wang, Haifeng
Pre-trained language models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. GPT-3 has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named…
External link:
http://arxiv.org/abs/2112.12731
Published in:
In Nurse Education Today June 2024 137
Author:
Sun, Yu, Wang, Shuohuan, Feng, Shikun, Ding, Siyu, Pang, Chao, Shang, Junyuan, Liu, Jiaxiang, Chen, Xuyi, Zhao, Yanbin, Lu, Yuxiang, Liu, Weixin, Wu, Zhihua, Gong, Weibao, Liang, Jianzhong, Shang, Zhizhou, Sun, Peng, Liu, Wei, Ouyang, Xuan, Yu, Dianhai, Tian, Hao, Wu, Hua, Wang, Haifeng
Pre-trained models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks. Recent works such as T5 and GPT-3 have shown that scaling up pre-trained language models can improve their generalization abilities. Particularly…
External link:
http://arxiv.org/abs/2107.02137
Transformers are not suited for processing long documents, due to their quadratically increasing memory and time consumption. Simply truncating a long document or applying the sparse attention mechanism will incur the context fragmentation problem or…
External link:
http://arxiv.org/abs/2012.15688
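The snippet contrasts truncating long documents with sparse attention and names the context fragmentation problem. A common baseline workaround is splitting the document into overlapping windows so each chunk keeps some left context; the sketch below is that generic baseline, with assumed window and stride sizes, not necessarily the mechanism the paper itself proposes.

```python
def sliding_windows(token_ids, window=512, stride=384):
    """Split a long token sequence into overlapping windows.

    The overlap (window - stride tokens) gives each chunk some left
    context, softening the context fragmentation that plain truncation
    causes. Generic sketch, not the paper's method.
    """
    if len(token_ids) <= window:
        return [token_ids]
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
    return chunks

# Example: a 1200-token document yields windows starting at 0, 384, 768
doc = list(range(1200))
for chunk in sliding_windows(doc):
    print(chunk[0], chunk[-1])
```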