Showing 1 - 10 of 16
for search: '"Yin, Fukun"'
As Artificial Intelligence Generated Content (AIGC) advances, a variety of methods have been developed to generate text, images, videos, and 3D objects from single or multimodal inputs, contributing efforts to emulate human-like cognitive content creation…
External link:
http://arxiv.org/abs/2408.05477
Authors:
Chen, Sijin, Chen, Xin, Pang, Anqi, Zeng, Xianfang, Cheng, Wei, Fu, Yijun, Yin, Fukun, Wang, Yanru, Wang, Zhibin, Zhang, Chi, Yu, Jingyi, Yu, Gang, Fu, Bin, Chen, Tao
The polygon mesh representation of 3D data exhibits great flexibility, fast rendering speed, and storage efficiency, which is widely preferred in various applications. However, given its unstructured graph representation, the direct generation of hig…
External link:
http://arxiv.org/abs/2405.20853
Recent advancements in language models have demonstrated their adeptness in conducting multi-turn dialogues and retaining conversational context. However, this proficiency remains largely unexplored in other multimodal generative models, particularly…
External link:
http://arxiv.org/abs/2404.01700
Authors:
Li, Mingsheng, Chen, Xin, Zhang, Chi, Chen, Sijin, Zhu, Hongyuan, Yin, Fukun, Yu, Gang, Chen, Tao
Recently, 3D understanding has become popular to facilitate autonomous agents in performing further decision-making. However, existing 3D datasets and methods are often limited to specific tasks. On the other hand, recent progress in Large Language Model…
External link:
http://arxiv.org/abs/2312.10763
Authors:
Yin, Fukun, Chen, Xin, Zhang, Chi, Jiang, Biao, Zhao, Zibo, Fan, Jiayuan, Yu, Gang, Li, Taihao, Chen, Tao
The advent of large language models, enabling flexibility through instruction-driven approaches, has revolutionized many traditional generative tasks, but large models for 3D data, particularly in comprehensively handling 3D shapes with other modalities…
External link:
http://arxiv.org/abs/2311.17618
Authors:
Ding, Yuhan, Yin, Fukun, Fan, Jiayuan, Li, Hui, Chen, Xin, Liu, Wen, Lu, Chongshan, Yu, Gang, Chen, Tao
Recent advances in implicit neural representations have achieved impressive results by sampling and fusing individual points along sampling rays in the sampling space. However, due to the explosively growing sampling space, finely representing and sy…
External link:
http://arxiv.org/abs/2311.01773
Recent advancements in implicit neural representations have contributed to high-fidelity surface reconstruction and photorealistic novel view synthesis. However, the computational complexity inherent in these methodologies presents a substantial impediment…
External link:
http://arxiv.org/abs/2310.14487
Neural Radiance Fields (NeRF) has achieved impressive results in single object scene reconstruction and novel view synthesis, which have been demonstrated on many single-modality and single-object-focused indoor scene datasets like DTU, BMVS, and NeR…
External link:
http://arxiv.org/abs/2301.06782
Implicit neural 3D representation has achieved impressive results in surface or scene reconstruction and novel view synthesis, which typically uses coordinate-based multi-layer perceptrons (MLPs) to learn a continuous scene representation. However…
External link:
http://arxiv.org/abs/2210.11170
Published in:
Neurocomputing, 1 October 2024, Vol. 600