Showing 1 - 10 of 14 for search: '"Liu, Fangfu"'
Author:
Liu, Fangfu, Sun, Wenqiang, Wang, Hanyang, Wang, Yikai, Sun, Haowen, Ye, Junliang, Zhang, Jun, Duan, Yueqi
Advancements in 3D scene reconstruction have transformed 2D images from the real world into 3D models, producing realistic 3D results from hundreds of input photos. Despite great success in dense-view reconstruction scenarios, rendering a detailed sc…
External link:
http://arxiv.org/abs/2408.16767
We are living in a flourishing era of digital media, where everyone has the potential to become a personal filmmaker. Current research on cinematic transfer empowers filmmakers to reproduce and manipulate the visual elements (e.g., cinematography and…
External link:
http://arxiv.org/abs/2408.12601
In recent years, there has been rapid development in 3D generation models, opening up new possibilities for applications such as simulating the dynamic movements of 3D objects and customizing their behaviors. However, current 3D generative models ten…
External link:
http://arxiv.org/abs/2406.04338
Author:
Wu, Kailu, Liu, Fangfu, Cai, Zhihan, Yan, Runjie, Wang, Hanyang, Hu, Yating, Duan, Yueqi, Ma, Kaisheng
In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Previous methods based on Score…
External link:
http://arxiv.org/abs/2405.20343
Author:
Ye, Junliang, Liu, Fangfu, Li, Qixiu, Wang, Zhengyi, Wang, Yikai, Wang, Xinzhou, Duan, Yueqi, Zhu, Jun
3D content creation from text prompts has shown remarkable success recently. However, current text-to-3D methods often generate 3D results that do not align well with human preferences. In this paper, we present a comprehensive framework, coined Drea…
External link:
http://arxiv.org/abs/2403.14613
Recent years have witnessed the strong power of 3D generation models, which offer a new level of creative flexibility by allowing users to guide the 3D content generation process through a single image or natural language. However, it remains challen…
External link:
http://arxiv.org/abs/2403.09625
Recently, 3D content creation from text prompts has demonstrated remarkable progress by utilizing 2D and 3D diffusion models. While 3D diffusion models ensure great multi-view consistency, their ability to generate high-quality and diverse 3D assets…
External link:
http://arxiv.org/abs/2312.06655
Robotic grasping faces new challenges in human-robot interaction scenarios. We consider the task in which the robot grasps a target object designated by a human's language directives. The robot not only needs to locate a target based on vision-and-language…
External link:
http://arxiv.org/abs/2308.00640
Discovering causal structure from purely observational data (i.e., causal discovery), aiming to identify causal relationships among variables, is a fundamental task in machine learning. The recent invention of differentiable score-based DAG learners…
External link:
http://arxiv.org/abs/2306.02822
In this paper, we aim to learn a semantic radiance field from multiple scenes that is accurate, efficient, and generalizable. While most existing NeRFs target the tasks of neural scene rendering, image synthesis, and multi-view reconstruction, there…
External link:
http://arxiv.org/abs/2303.13014