Showing 1 - 10 of 396 for search: '"WANG Zedong"'
Published in:
Meikuang Anquan, Vol 53, Iss 11, Pp 76-82 (2022)
To address the difficulty of gas extraction from low-permeability, high-gas coal seams, and building on existing research results, a coal seam gas-liquid composite fracturing pressure-relief and permeability-enhancement technology was proposed…
External link:
https://doaj.org/article/38c87db6ea2a4c628392303e970467fd
Author:
Li, Siyuan, Tian, Juanxi, Wang, Zedong, Zhang, Luyuan, Liu, Zicheng, Jin, Weiyang, Liu, Yang, Sun, Baigui, Li, Stan Z.
This paper delves into the interplay between vision backbones and optimizers, unveiling an inter-dependent phenomenon termed backbone-optimizer coupling bias (BOCB). We observe that canonical CNNs, such as…
External link:
http://arxiv.org/abs/2410.06373
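A minimal sketch of how such a coupling could be probed: sweep (backbone, optimizer) pairs and compare final losses per backbone. The models, optimizer settings, synthetic data, and step budget are illustrative assumptions, not the paper's actual benchmark code.

import torch
import torch.nn as nn
from torchvision.models import resnet18

def final_loss(model, optimizer, steps=20):
    """Train briefly on one synthetic batch; return the final loss."""
    x = torch.randn(8, 3, 64, 64)             # synthetic images (assumption)
    y = torch.randint(0, 10, (8,))            # synthetic labels
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return loss.item()

# For a fixed backbone, a wide spread of results across optimizers would
# suggest strong coupling of that architecture to its optimizer choice.
for name, make_opt in {
    "sgd":   lambda p: torch.optim.SGD(p, lr=0.1, momentum=0.9),
    "adamw": lambda p: torch.optim.AdamW(p, lr=1e-3, weight_decay=0.05),
}.items():
    model = resnet18(num_classes=10)
    print(name, final_loss(model, make_opt(model.parameters())))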
Published in:
Italian Journal of Animal Science, Vol 20, Iss 1, Pp 1656-1663 (2021)
This study was conducted to investigate the effects of guanidinoacetic acid (GAA) and betaine on the growth performance, meat quality, and metabolism of ducks. A total of 384 one-day-old Cherry Valley meat ducks (55.75 ± 0.55 g) were randomly assigned to…
External link:
https://doaj.org/article/0005a8c4222847388457761ff995e9f3
Author:
Jin, Xin, Zhu, Hongyu, Li, Siyuan, Wang, Zedong, Liu, Zicheng, Yu, Chang, Qin, Huafeng, Li, Stan Z.
As Deep Neural Networks have achieved thrilling breakthroughs in the past decade, data augmentations have garnered increasing attention as regularization techniques when massive labeled data are unavailable. Among existing augmentations, Mixup and…
External link:
http://arxiv.org/abs/2409.05202
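For reference, the Mixup baseline the abstract mentions is a few lines: convex-combine random pairs of inputs and their (one-hot) labels. This is a sketch of standard Mixup (Zhang et al., 2018), not the linked paper's proposed method; the alpha value and toy batch are illustrative.

import torch

def mixup(x, y, alpha=0.2):
    """Mixup: blend each sample with a randomly permuted partner.
    x: input batch, y: one-hot labels; returns the mixed batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y + (1 - lam) * y[perm]
    return x_mix, y_mix

# Usage on a toy batch of 4 RGB images with one-hot labels over 10 classes:
x = torch.randn(4, 3, 32, 32)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (4,)), 10).float()
x_mix, y_mix = mixup(x, y)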
To mitigate the computational complexity of the self-attention mechanism on long sequences, linear attention utilizes computation tricks to achieve linear complexity, while state space models (SSMs) popularize a favorable practice of using non-data-dependent…
External link:
http://arxiv.org/abs/2406.08128
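The "computation tricks" behind linear attention amount to reordering the matrix products through a kernel feature map. Below is a sketch of the standard kernelized formulation (the elu+1 feature map follows Katharopoulos et al., 2020); it illustrates the general technique, not this paper's unified framework.

import torch

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized linear attention: with feature map phi, the output is
    phi(Q) @ (phi(K)^T V) / (phi(Q) @ sum_t phi(K_t)), computed in
    O(L * d^2) rather than the O(L^2 * d) of softmax attention."""
    phi = lambda t: torch.nn.functional.elu(t) + 1
    q, k = phi(q), phi(k)
    kv = torch.einsum("ld,le->de", k, v)       # (d, d) summary, O(L d^2)
    z = q @ k.sum(dim=0)                       # per-query normalizer, shape (L,)
    return (q @ kv) / (z.unsqueeze(-1) + eps)

L, d = 1024, 64
q, k, v = (torch.randn(L, d) for _ in range(3))
out = linear_attention(q, k, v)                # shape (L, d)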
Author:
Li, Siyuan, Wang, Zedong, Liu, Zicheng, Wu, Di, Tan, Cheng, Zheng, Jiangbin, Huang, Yufei, Li, Stan Z.
Similar to natural language models, pre-trained genome language models have been proposed to capture the underlying intricacies within genomes with unsupervised sequence modeling. They have become essential tools for researchers and practitioners in biology…
External link:
http://arxiv.org/abs/2405.10812
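A common form of the unsupervised sequence modeling mentioned above is BERT-style masked modeling over k-mer tokens. The sketch below shows only the tokenize-and-mask step; the k, vocabulary, and mask rate are generic assumptions, not any specific genome language model's configuration.

import itertools
import torch

def kmers(seq, k=3):
    """Split a DNA string into overlapping k-mer tokens."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

vocab = {"".join(t): i for i, t in enumerate(itertools.product("ACGT", repeat=3))}
MASK_ID = len(vocab)                           # reserved id for the [MASK] token

seq = "ACGTACGGTACCA"
ids = torch.tensor([vocab[m] for m in kmers(seq)])
mask = torch.rand(len(ids)) < 0.15             # mask ~15% of positions
inputs = torch.where(mask, torch.full_like(ids, MASK_ID), ids)
# A transformer encoder is then trained with cross-entropy on the masked
# positions to recover the original k-mer ids.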
Transformer models have been successful in various sequence processing tasks, but the self-attention mechanism's computational cost limits their practicality for long sequences. Although there are existing attention variants that improve computational…
External link:
http://arxiv.org/abs/2404.11163
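One well-known family of the "attention variants" the abstract alludes to is local (sliding-window) attention, where each query attends only to nearby keys. The sketch below illustrates that general idea, not the linked paper's method; for clarity it materializes the full score matrix, whereas a real kernel would compute only the band to get the O(L * window) cost.

import torch

def sliding_window_attention(q, k, v, window=128):
    """Local attention: restrict each query to keys within `window`
    positions, cutting effective cost from O(L^2) to O(L * window)."""
    L, d = q.shape
    scores = (q @ k.T) / d ** 0.5                      # dense here for clarity only
    idx = torch.arange(L)
    band = (idx[:, None] - idx[None, :]).abs() <= window
    scores = scores.masked_fill(~band, float("-inf"))  # drop out-of-window pairs
    return torch.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(512, 64) for _ in range(3))
out = sliding_window_attention(q, k, v)                # shape (512, 64)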
Author:
Li, Siyuan, Liu, Zicheng, Tian, Juanxi, Wang, Ge, Wang, Zedong, Jin, Weiyang, Wu, Di, Tan, Cheng, Lin, Tao, Liu, Yang, Sun, Baigui, Li, Stan Z.
Exponential Moving Average (EMA) is a widely used weight averaging (WA) regularization that learns flat optima for better generalization without extra cost in deep neural network (DNN) optimization. Despite achieving better flatness, existing WA methods…
External link:
http://arxiv.org/abs/2402.09240
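The EMA baseline this abstract builds on maintains a shadow copy of the weights updated as shadow <- decay * shadow + (1 - decay) * online after every optimizer step, and evaluates the shadow model. A minimal sketch of that baseline, not the paper's proposed improved averaging scheme; the decay value is a common default, not this paper's setting.

import copy
import torch

class EMA:
    """Exponential Moving Average of model weights (evaluation-time shadow)."""
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = copy.deepcopy(model).eval()
        for p in self.shadow.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        # shadow <- decay * shadow + (1 - decay) * online weights
        for s, p in zip(self.shadow.parameters(), model.parameters()):
            s.mul_(self.decay).add_(p, alpha=1 - self.decay)

model = torch.nn.Linear(8, 2)
ema = EMA(model)
# ... inside the training loop, after optimizer.step():
ema.update(model)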
Author:
Li, Siyuan, Zhang, Luyuan, Wang, Zedong, Wu, Di, Wu, Lirong, Liu, Zicheng, Xia, Jun, Tan, Cheng, Liu, Yang, Sun, Baigui, Li, Stan Z.
As the deep learning revolution marches on, self-supervised learning has garnered increasing attention in recent years thanks to its remarkable representation learning ability and low dependence on labeled data. Among these varied self-supervised…
External link:
http://arxiv.org/abs/2401.00897
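A central self-supervised recipe in this line of work is masked modeling: hide a random subset of input tokens and train the model to reconstruct them. The sketch below shows MAE-style random masking of image patches only; the shapes and 75% ratio are standard illustrative choices, and the surveyed methods differ in their targets and decoders.

import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch tokens; record which were kept.
    patches: (N, L, D) token sequence per image."""
    N, L, D = patches.shape
    n_keep = int(L * (1 - mask_ratio))
    noise = torch.rand(N, L)                        # random score per patch
    keep_idx = noise.argsort(dim=1)[:, :n_keep]     # lowest-noise patches kept
    visible = torch.gather(
        patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return visible, keep_idx                        # encoder sees only `visible`

patches = torch.randn(2, 196, 768)                  # e.g. 14x14 ViT patches
visible, keep_idx = random_masking(patches)         # visible: (2, 49, 768)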
Semi-supervised learning (SSL) has witnessed great progress through various improvements to the self-training framework with pseudo labeling. The main challenge is how to distinguish high-quality pseudo labels in the face of confirmation bias. However, existing…
External link:
http://arxiv.org/abs/2310.03013
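The standard guard against confirmation bias in this framework is confidence thresholding, as popularized by FixMatch: accept a pseudo label only when the model's top softmax probability is high enough. A sketch of that common baseline, not the linked paper's proposed selection criterion; the threshold and mock logits are illustrative.

import torch

def select_pseudo_labels(logits, threshold=0.95):
    """Keep an unlabeled sample only if the model's top softmax
    probability exceeds `threshold`; return labels + selection mask."""
    probs = torch.softmax(logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)
    keep = conf >= threshold
    return pseudo[keep], keep

logits = torch.randn(16, 10) * 3                    # mock predictions, 10 classes
labels, keep = select_pseudo_labels(logits)
# Only the `keep` subset contributes to the unlabeled loss this step.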