Showing 1 - 10 of 781 for search: '"Wang, Xiaodi"'
Neural networks have continued to gain prevalence in the modern era for their ability to model complex data through pattern recognition and behavior remodeling. However, the static construction of traditional neural networks inhibits dynamic intelligence…
External link:
http://arxiv.org/abs/2408.15462
Author:
Liu, Jiahe, Wang, Xiaodi
The study of post-wildfire plant regrowth is essential for developing successful ecosystem recovery strategies. Prior research mainly examines key ecological and biogeographical factors influencing post-fire succession. This research proposes a novel…
External link:
http://arxiv.org/abs/2311.02492
Small CNN-based models usually require transferring knowledge from a large model before they are deployed in computationally resource-limited edge devices. Masked image modeling (MIM) methods achieve great success in various visual tasks but remain…
External link:
http://arxiv.org/abs/2309.09571
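The record above mentions transferring knowledge from a large model into a small CNN before edge deployment. A common recipe for this is logit-based knowledge distillation (Hinton et al., 2015); the linked paper's abstract is truncated here, so the sketch below is an illustrative assumption rather than that paper's method, and the model sizes, temperature T, and mixing weight alpha are toy values.

# Minimal sketch of logit-based knowledge distillation: a small student
# CNN is trained to match the temperature-softened output distribution of
# a frozen teacher, plus the usual hard-label loss. Toy model sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))
student = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))

def distill_loss(x, labels, T=4.0, alpha=0.7):
    with torch.no_grad():                      # teacher is frozen
        t_logits = teacher(x)
    s_logits = student(x)
    # soft targets: KL between temperature-scaled distributions,
    # rescaled by T*T so gradient magnitudes stay comparable
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(s_logits, labels)   # standard supervised term
    return alpha * soft + (1 - alpha) * hard

x = torch.randn(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
distill_loss(x, labels).backward()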
Pretraining on large-scale datasets can boost the performance of object detectors, while annotated datasets for object detection are hard to scale up due to the high labor cost. What we possess are numerous isolated field-specific datasets; thus…
External link:
http://arxiv.org/abs/2304.03580
Author:
Jiang, Feng; Wang, Xiaodi; Carmichael, Michael; Chen, Yanfei; Huang, Ruijian; Xiao, Yue; Zhou, Jifang (zjifang@cpu.edu.cn); Su, Cunhua (suteensu@163.com)
Published in:
Journal of Cardiothoracic Surgery, 11/14/2024, Vol. 19, Issue 1, pp. 1-12.
Author:
Zhang, Xinyu, Chen, Jiahui, Yuan, Junkun, Chen, Qiang, Wang, Jian, Wang, Xiaodi, Han, Shumin, Chen, Xiaokang, Pi, Jimin, Yao, Kun, Han, Junyu, Ding, Errui, Wang, Jingdong
Masked image modeling (MIM) learns visual representation by masking and reconstructing image patches. Applying the reconstruction supervision on the CLIP representation has been proven effective for MIM. However, it is still under-explored how CLIP…
External link:
http://arxiv.org/abs/2211.09799
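The record above describes the core MIM loop: mask image patches, then train the network to reconstruct them under some supervision (the linked paper targets CLIP representations). The sketch below is a minimal, self-contained illustration that reconstructs raw pixels instead of CLIP features, with toy dimensions chosen for brevity; it is not the paper's method.

# Minimal sketch of masked image modeling (MIM), SimMIM-style: replace
# masked patch embeddings with a learnable mask token, mix patches with
# self-attention, and reconstruct pixels only at masked positions.
import torch
import torch.nn as nn

patch, dim = 16, 128
n_patches = (32 // 16) ** 2                     # 32x32 toy images -> 4 patches

embed = nn.Linear(patch * patch * 3, dim)       # per-patch embedding
mixer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
decoder = nn.Linear(dim, patch * patch * 3)     # predicts raw pixels per patch
mask_token = nn.Parameter(torch.zeros(1, 1, dim))

def mim_loss(images):
    b = images.size(0)
    # (B, 3, 32, 32) -> (B, n_patches, patch*patch*3)
    p = images.unfold(2, patch, patch).unfold(3, patch, patch)
    p = p.permute(0, 2, 3, 1, 4, 5).reshape(b, n_patches, -1)

    keep = torch.rand(b, n_patches) > 0.75      # mask ~75% of patches
    tokens = embed(p)
    tokens = torch.where(keep.unsqueeze(-1), tokens, mask_token.expand_as(tokens))

    # self-attention lets visible patches inform masked positions
    recon = decoder(mixer(tokens))

    masked = ~keep                              # loss only on masked patches
    return ((recon - p) ** 2)[masked].mean()

mim_loss(torch.randn(8, 3, 32, 32)).backward()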
Author:
Chen, Qiang, Wang, Jian, Han, Chuchu, Zhang, Shan, Li, Zexian, Chen, Xiaokang, Chen, Jiahui, Wang, Xiaodi, Han, Shuming, Zhang, Gang, Feng, Haocheng, Yao, Kun, Han, Junyu, Ding, Errui, Wang, Jingdong
We present a strong object detector with encoder-decoder pretraining and finetuning. Our method, called Group DETR v2, is built upon a vision transformer encoder ViT-Huge (Dosovitskiy et al., 2020), a DETR variant DINO (Zhang et al., 2022), and an…
External link:
http://arxiv.org/abs/2211.03594
Author:
Wang, Yunhao, Sun, Huixin, Wang, Xiaodi, Zhang, Bin, Li, Chao, Xin, Ying, Zhang, Baochang, Ding, Errui, Han, Shumin
Vision Transformer and its variants have demonstrated great potential in various computer vision tasks. But conventional vision transformers often focus on global dependency at a coarse level, which suffer from a learning challenge on global relation…
External link:
http://arxiv.org/abs/2209.01620
Published in:
Knowledge-Based Systems, 25 November 2024, Vol. 304.
Published in:
Journal of Materials Research and Technology, November-December 2024, Vol. 33, pp. 673-682.