Showing 1 - 10 of 1,022 for search: '"Dai Tao"'
Published in:
Yuanzineng kexue jishu, Vol 56, Iss 1, Pp 136-145 (2022)
The Chinese Fusion Engineering Test Reactor (CFETR) has been proposed to bridge the technology gap between the International Thermonuclear Experimental Reactor (ITER) and the Fusion Demonstration Reactor (DEMO). As the most crucial nuclear component, fus…
External link:
https://doaj.org/article/b6f1f380fb5943638d6b44a241af8307
Recently, image-to-3D approaches have significantly advanced the generation quality and speed of 3D assets based on large reconstruction models, particularly 3D Gaussian reconstruction models. Existing large 3D Gaussian models directly map 2D image t…
External link:
http://arxiv.org/abs/2408.10935
High-resolution point cloud (HRPCD) anomaly detection (AD) plays a critical role in precision machining and high-end equipment manufacturing. Although considerable 3D-AD methods have been proposed recently, they still cannot meet the requirement…
External link:
http://arxiv.org/abs/2408.04604
Transferable targeted adversarial attacks aim to mislead models into outputting adversary-specified predictions in black-box scenarios. Recent studies have introduced single-target generative attacks that train a generator for each target cl…
External link:
http://arxiv.org/abs/2407.10179
The pre-trained point cloud model based on Masked Point Modeling (MPM) has exhibited substantial improvements across various tasks. However, two drawbacks hinder their practical application. Firstly, the positional embedding of masked patches in the…
External link:
http://arxiv.org/abs/2407.09344
Dataset distillation is an emerging dataset reduction method that condenses large-scale datasets while maintaining task accuracy. Current methods have integrated parameterization techniques to boost synthetic dataset performance by shifting the opt…
External link:
http://arxiv.org/abs/2406.05704
Transformer-based and MLP-based methods have emerged as leading approaches in time series forecasting (TSF). While Transformer-based methods excel in capturing long-range dependencies, they suffer from high computational complexities and tend to over…
External link:
http://arxiv.org/abs/2406.03751
Author:
Yang, Jiarui, Dai, Tao, Li, Naiqi, Wu, Junxi, Liu, Peiyuan, Li, Jinmin, Bao, Jigang, Zhang, Haigang, Xia, Shutao
In recent years, generative pre-trained paradigms such as Large Language Models (LLMs) and Large Vision Models (LVMs) have achieved revolutionary advancements and widespread real-world applications. Particularly, the emergence of pre-trained LLMs-bas…
External link:
http://arxiv.org/abs/2406.02212
Author:
Zha, Yaohua, Li, Naiqi, Wang, Yanzi, Dai, Tao, Guo, Hang, Chen, Bin, Wang, Zhi, Ouyang, Zhihao, Xia, Shu-Tao
The pre-trained point cloud model based on Masked Point Modeling (MPM) has exhibited substantial improvements across various tasks. However, these models heavily rely on the Transformer, leading to quadratic complexity and a limited decoder, hindering…
External link:
http://arxiv.org/abs/2405.17149
Author:
Qin, Shiyu, Wang, Jinpeng, Zhou, Yimin, Chen, Bin, Luo, Tianci, An, Baoyi, Dai, Tao, Xia, Shutao, Wang, Yaowei
Learned visual compression is an important and active task in multimedia. Existing approaches have explored various CNN- and Transformer-based designs to model content distribution and eliminate redundancy, where balancing efficacy (i.e., rate-distor…
External link:
http://arxiv.org/abs/2405.15413