Showing 1 - 10 of 35 for search: '"Yang, Bangbang"'
Author:
Dang, Ziqiang, Dong, Wenqi, Yang, Zesong, Yang, Bangbang, Li, Liang, Ma, Yuewen, Cui, Zhaopeng
In this paper, we present TexPro, a novel method for high-fidelity material generation for input 3D meshes given text prompts. Unlike existing text-conditioned texture generation methods that typically generate RGB textures with baked lighting, TexPro…
External link:
http://arxiv.org/abs/2410.15891
Author:
Li, Renjie, Pan, Panwang, Yang, Bangbang, Xu, Dejia, Zhou, Shijie, Zhang, Xuanyang, Li, Zeming, Kadambi, Achuta, Wang, Zhangyang, Tu, Zhengzhong, Fan, Zhiwen
The blooming of virtual reality and augmented reality (VR/AR) technologies has driven an increasing demand for the creation of high-quality, immersive, and dynamic environments. However, existing generative techniques either focus solely on dynamic…
External link:
http://arxiv.org/abs/2406.13527
Author:
Dong, Wenqi, Yang, Bangbang, Ma, Lin, Liu, Xiao, Cui, Liyuan, Bao, Hujun, Ma, Yuewen, Cui, Zhaopeng
As humans, we aspire to create media content that is both freely willed and readily controlled. Thanks to the prominent development of generative techniques, we can now easily utilize 2D diffusion methods to synthesize images controlled by raw sketch…
External link:
http://arxiv.org/abs/2405.08054
Author:
Bao, Chong, Zhang, Yinda, Li, Yuan, Zhang, Xiyu, Yang, Bangbang, Bao, Hujun, Pollefeys, Marc, Zhang, Guofeng, Cui, Zhaopeng
Recently, we have witnessed the explosive growth of various volumetric representations in modeling animatable head avatars. However, due to the diversity of frameworks, there is no practical method to support high-level applications like 3D head avatar…
External link:
http://arxiv.org/abs/2404.02152
Diffusion-based methods have achieved prominent success in generating 2D media. However, accomplishing similar proficiencies for scene-level mesh texturing in 3D spatial applications, e.g., XR/VR, remains constrained, primarily due to the intricate…
External link:
http://arxiv.org/abs/2310.13119
Different from traditional video cameras, event cameras capture an asynchronous event stream in which each event encodes pixel location, trigger time, and the polarity of the brightness change. In this paper, we introduce a novel graph-based framework…
External link:
http://arxiv.org/abs/2308.14419
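The event-camera paper above describes each event as a tuple of pixel location, trigger time, and brightness-change polarity. The following is a minimal sketch of how such an event stream might be stored and linked into a spatio-temporal graph; the field names, the time-scaling constant, and the k-nearest-neighbor linking rule are illustrative assumptions, not the paper's actual representation.

    import numpy as np

    # One event per row: pixel location (x, y), trigger time t (seconds),
    # and polarity p in {-1, +1} for a brightness decrease/increase.
    events = np.array(
        [(12, 40, 0.001, +1), (13, 40, 0.002, -1), (12, 41, 0.004, +1)],
        dtype=[("x", np.int32), ("y", np.int32), ("t", np.float64), ("p", np.int8)],
    )

    def build_event_graph(ev, k=2, time_scale=1000.0):
        """Link each event to its k nearest neighbors in (x, y, scaled t) space,
        treating time as a third spatial coordinate (an assumed, common way to
        turn an event stream into a graph)."""
        pts = np.stack([ev["x"], ev["y"], ev["t"] * time_scale], axis=1).astype(np.float64)
        dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)          # exclude self-loops
        nbrs = np.argsort(dist, axis=1)[:, :k]  # indices of the k closest events
        return [(i, int(j)) for i in range(len(ev)) for j in nbrs[i]]

    print(build_event_graph(events))  # e.g. [(0, 1), (0, 2), ...]

The brute-force pairwise distance matrix keeps the sketch short; a real pipeline would use a spatial index, since event streams contain millions of events per second.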
Published in:
ICCV 2023
Despite the tremendous progress in neural radiance fields (NeRF), we still face a trade-off between quality and efficiency, e.g., MipNeRF presents fine-detailed and anti-aliased renderings but takes days to train, while Instant-ngp…
External link:
http://arxiv.org/abs/2307.11335
Author:
Bao, Chong, Zhang, Yinda, Yang, Bangbang, Fan, Tianxing, Yang, Zesong, Bao, Hujun, Zhang, Guofeng, Cui, Zhaopeng
Despite the great success in 2D editing using user-friendly tools, such as Photoshop, semantic strokes, or even text prompts, similar capabilities in 3D are still limited, either relying on 3D modeling skills or allowing editing within only a…
External link:
http://arxiv.org/abs/2303.13277
Author:
Yang, Bangbang, Bao, Chong, Zeng, Junyi, Bao, Hujun, Zhang, Yinda, Cui, Zhaopeng, Zhang, Guofeng
Very recently, neural implicit rendering techniques have rapidly evolved and shown great advantages in novel view synthesis and 3D scene reconstruction. However, existing neural rendering methods for editing purposes offer limited functionality…
External link:
http://arxiv.org/abs/2207.11911
Author:
Zhao, Boming, Yang, Bangbang, Li, Zhenyang, Li, Zuoyue, Zhang, Guofeng, Zhao, Jiashu, Yin, Dawei, Cui, Zhaopeng, Bao, Hujun
Expanding an existing tourist photo from a partially captured scene to a full scene is one of the desired experiences for photography applications. Although photo extrapolation has been well studied, it is much more challenging to extrapolate a photo…
External link:
http://arxiv.org/abs/2207.06899