Showing 1 - 10 of 458 for search: '"Fu Yanwei"'
Author:
Wang Xinyu, Ma Wanzhuo, Fu Yanwei, Liu Xianzhu, Tao Zonghui, Song Yansong, Dong Keyan, Jiang Huilin
Published in:
Nanophotonics, Vol 12, Iss 12, Pp 2073-2101 (2023)
With the development of all-optical networks, all-optical devices have become a research hotspot in recent years. Two-dimensional materials, represented by graphene and black phosphorus, have attracted great interest in the scientific community due t…
External link:
https://doaj.org/article/42f57aefa9d440c482cfca556823e7e5
Author:
Yu, Junqiu, Ren, Xinlin, Gu, Yongchong, Lin, Haitao, Wang, Tianyu, Zhu, Yi, Xu, Hang, Jiang, Yu-Gang, Xue, Xiangyang, Fu, Yanwei
Language-guided robotic grasping is a rapidly advancing field where robots are instructed using human language to grasp specific objects. However, existing methods often depend on dense camera views and struggle to quickly update scenes, limiting the…
External link:
http://arxiv.org/abs/2412.02140
As large-scale diffusion models continue to advance, they excel at producing high-quality images but often generate unwanted content, such as sexually explicit or violent content. Existing methods for concept removal generally guide the image generat…
External link:
http://arxiv.org/abs/2412.01244
In this paper, we focus on the Ego-Exo Object Correspondence task, an emerging challenge in the field of computer vision that aims to map objects across ego-centric and exo-centric views. We introduce ObjectRelator, a novel method designed to tackle…
External link:
http://arxiv.org/abs/2411.19083
We introduce MVGenMaster, a multi-view diffusion model enhanced with 3D priors to address versatile Novel View Synthesis (NVS) tasks. MVGenMaster leverages 3D priors that are warped using metric depth and camera poses, significantly enhancing both ge…
External link:
http://arxiv.org/abs/2411.16157
Author:
Jiang, Boyuan, Hu, Xiaobin, Luo, Donghao, He, Qingdong, Xu, Chengming, Peng, Jinlong, Zhang, Jiangning, Wang, Chengjie, Wu, Yunsheng, Fu, Yanwei
Although image-based virtual try-on has made considerable progress, emerging approaches still encounter challenges in producing high-fidelity and robust fitting images across diverse scenarios. These methods often struggle with issues such as texture…
External link:
http://arxiv.org/abs/2411.10499
While neural networks have made significant strides in many AI tasks, they remain vulnerable to a range of noise types, including natural corruptions, adversarial noise, and low-resolution artifacts. Many existing approaches focus on enhancing robust…
External link:
http://arxiv.org/abs/2409.18419
Reconstructing 3D visuals from functional Magnetic Resonance Imaging (fMRI) data, introduced as Recon3DMind in our conference work, is of significant interest to both cognitive neuroscience and computer vision. To advance this task, we present the fM…
External link:
http://arxiv.org/abs/2409.11315
Author:
Tan, Weipeng, Lin, Chuming, Xu, Chengming, Ji, Xiaozhong, Zhu, Junwei, Wang, Chengjie, Wu, Yunsheng, Fu, Yanwei
Talking Head Generation (THG), typically driven by audio, is an important and challenging task with broad application prospects in various fields such as digital humans, film production, and virtual reality. While diffusion model-based THG methods pr…
External link:
http://arxiv.org/abs/2409.03270
Novel View Synthesis (NVS) and 3D generation have recently achieved prominent improvements. However, these works mainly focus on confined categories or synthetic 3D assets, which are discouraged from generalizing to challenging in-the-wild scenes and…
External link:
http://arxiv.org/abs/2408.08000