Showing 1 - 10 of 290 results for the search: '"Funkhouser, Thomas A."'
Author:
Xiao, Zihao, Jing, Longlong, Wu, Shangxuan, Zhu, Alex Zihao, Ji, Jingwei, Jiang, Chiyu Max, Hung, Wei-Chih, Funkhouser, Thomas, Kuo, Weicheng, Angelova, Anelia, Zhou, Yin, Sheng, Shiwei
3D panoptic segmentation is a challenging perception task, especially in autonomous driving. It aims to predict both semantic and instance annotations for 3D points in a scene. Although prior 3D panoptic segmentation approaches have achieved great performance… (see the sketch below)
External link:
http://arxiv.org/abs/2401.02402
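As background for the entry above: panoptic segmentation assigns every 3D point both a semantic class and an instance ID, which are commonly packed into a single panoptic label per point. A minimal sketch of such an encoding, assuming a packing scheme and OFFSET constant that are purely illustrative (not the paper's method):

```python
import numpy as np

# Illustrative encoding: panoptic_label = semantic_class * OFFSET + instance_id.
# OFFSET is an assumption here; it just needs to exceed the largest instance ID.
OFFSET = 1000

def encode_panoptic(semantic, instance):
    """Pack per-point semantic classes and instance IDs into one label array."""
    return semantic * OFFSET + instance

def decode_panoptic(panoptic):
    """Recover (semantic, instance) from packed panoptic labels."""
    return panoptic // OFFSET, panoptic % OFFSET

# Toy scene with 5 points: two car instances (class 1) and road (class 0).
semantic = np.array([1, 1, 1, 0, 0])
instance = np.array([1, 1, 2, 0, 0])
panoptic = encode_panoptic(semantic, instance)
assert (decode_panoptic(panoptic)[0] == semantic).all()
print(panoptic)  # [1001 1001 1002    0    0]
```

Packed labels of this general shape are what panoptic-quality style evaluations typically consume, since each value jointly identifies "what" and "which one".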
Author:
Lan, Yushi, Tan, Feitong, Qiu, Di, Xu, Qiangeng, Genova, Kyle, Huang, Zeng, Fanello, Sean, Pandey, Rohit, Funkhouser, Thomas, Loy, Chen Change, Zhang, Yinda
We present a novel framework for generating photorealistic 3D human heads and subsequently manipulating and reposing them with remarkable flexibility. The proposed approach leverages an implicit function representation of 3D human heads, employing 3D… (see the sketch below)
External link:
http://arxiv.org/abs/2312.03763
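The entry above relies on an implicit function representation: the head is stored not as an explicit mesh but as a function f(x, y, z) that can be queried at any point, with the surface recovered as a level set. A minimal sketch of that query pattern, using an analytic sphere signed-distance function as a stand-in for the learned network (the sphere is an illustrative assumption, not the paper's model):

```python
import numpy as np

def sdf_sphere(points, center=np.zeros(3), radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the surface,
    positive outside. A learned MLP plays this role in implicit methods."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Query the implicit function on a dense grid; the surface is the zero
# level set (marching cubes would triangulate it in a real pipeline).
grid = np.stack(np.meshgrid(*[np.linspace(-1.5, 1.5, 32)] * 3), axis=-1)
values = sdf_sphere(grid.reshape(-1, 3))
near_surface = np.abs(values) < 0.05
print(f"{near_surface.sum()} of {values.size} samples lie near the surface")
```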
Author:
Wu, Jimmy, Antonova, Rika, Kan, Adam, Lepert, Marion, Zeng, Andy, Song, Shuran, Bohg, Jeannette, Rusinkiewicz, Szymon, Funkhouser, Thomas
For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios. In this work, we investigate personalization of household cleanup with robots that can tidy up rooms by picking… (see the sketch below)
External link:
http://arxiv.org/abs/2305.05658
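The entry above hinges on generalizing a few observed placements to unseen objects. A minimal sketch of that idea via nearest-neighbor matching under a toy text similarity; the paper itself uses far stronger generalization machinery, so every name and example below is an illustrative assumption:

```python
def similarity(a, b):
    """Toy text similarity: Jaccard overlap of words. A learned text
    embedding would replace this in any real system."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

# A few demonstrated user preferences: object -> receptacle.
examples = {
    "dirty t shirt": "laundry basket",
    "empty soda can": "recycling bin",
    "lego brick": "toy chest",
}

def predict_receptacle(new_object):
    """Reapply preferences: route a novel object like its closest example."""
    best = max(examples, key=lambda obj: similarity(obj, new_object))
    return examples[best]

print(predict_receptacle("dirty sock"))       # laundry basket
print(predict_receptacle("empty juice can"))  # recycling bin
```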
Removing clutter from scenes is essential in many applications, ranging from privacy-concerned content filtering to data augmentation. In this work, we present an automatic system that removes clutter from 3D scenes and inpaints with coherent geometry…
External link:
http://arxiv.org/abs/2304.03763
Author:
Yu, Hong-Xing, Guo, Michelle, Fathi, Alireza, Chang, Yen-Yu, Chan, Eric Ryan, Gao, Ruohan, Funkhouser, Thomas, Wu, Jiajun
Published in:
Transactions on Machine Learning Research (TMLR), 2023
Photorealistic object appearance modeling from 2D images is a constant topic in vision and graphics. While neural implicit methods (such as Neural Radiance Fields) have shown high-fidelity view synthesis results, they cannot relight the captured objects… (see the sketch below)
External link:
http://arxiv.org/abs/2303.06138
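For context on the relighting limitation mentioned above: a radiance field bakes illumination into its output colors, whereas relighting requires appearance factored into material and lighting that can be recombined. A minimal Lambertian sketch of the factored side; the shading model and all values are illustrative, not the paper's formulation:

```python
import numpy as np

def shade_lambertian(albedo, normal, light_dir, light_color):
    """Diffuse shading: color = albedo * light_color * max(0, n . l).
    Because albedo and geometry are explicit, the light can be changed
    freely -- exactly what a baked radiance field cannot do."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_color * max(np.dot(n, l), 0.0)

albedo = np.array([0.8, 0.2, 0.2])   # reddish surface material
normal = np.array([0.0, 0.0, 1.0])
print(shade_lambertian(albedo, normal, np.array([0, 0, 1]), np.ones(3)))
print(shade_lambertian(albedo, normal, np.array([1, 0, 1]), np.ones(3)))  # relit
```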
Author:
Zhang, Xiaoshuai, Kundu, Abhijit, Funkhouser, Thomas, Guibas, Leonidas, Su, Hao, Genova, Kyle
We address efficient and structure-aware 3D scene representation from images. Nerflets are our key contribution -- a set of local neural radiance fields that together represent a scene. Each nerflet maintains its own spatial position, orientation, and… (see the sketch below)
External link:
http://arxiv.org/abs/2303.03361
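To make the local-fields idea above concrete: a set of local fields can be combined by weighting each one's output by its spatial influence at the query point. A minimal sketch with isotropic Gaussian influence and constant per-field colors; the weighting scheme and toy fields are illustrative assumptions, not the Nerflets architecture:

```python
import numpy as np

centers = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])  # one per local field
colors  = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # each field's output
scale = 1.0                                              # influence radius

def query(point):
    """Blend local fields by normalized Gaussian influence weights."""
    d2 = np.sum((centers - point) ** 2, axis=1)
    w = np.exp(-d2 / (2 * scale ** 2))
    w /= w.sum()
    return w @ colors

print(query(np.array([0.0, 0.0, 0.0])))  # dominated by the first field
print(query(np.array([1.0, 0.0, 0.0])))  # an even blend of the two
```

A decomposition like this is what makes the representation structure-aware: each local field has a pose and extent one can inspect or edit, unlike a single monolithic field.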
Author:
Yang, Guandao, Benaim, Sagie, Jampani, Varun, Genova, Kyle, Barron, Jonathan T., Funkhouser, Thomas, Hariharan, Bharath, Belongie, Serge
Neural fields have emerged as a new paradigm for representing signals, thanks to their ability to represent them compactly while being easy to optimize. In most applications, however, neural fields are treated like black boxes, which precludes many signal manipulation… (see the sketch below)
External link:
http://arxiv.org/abs/2302.04862
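A neural field, as in the entry above, is simply a network mapping coordinates to signal values, fit by gradient descent. A minimal sketch fitting a one-hidden-layer numpy network to a 1D signal; the architecture and hyperparameters are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 256)[:, None]   # input coordinates
y = np.sin(x)                                  # target signal

# Tiny coordinate network: 1 -> 64 -> 1 with tanh activation.
W1, b1 = rng.normal(0, 1.0, (1, 64)), np.zeros(64)
W2, b2 = rng.normal(0, 0.1, (64, 1)), np.zeros(1)
lr = 1e-2

for step in range(2000):
    h = np.tanh(x @ W1 + b1)                   # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagate the squared-error gradient by hand (up to a constant).
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", float((err ** 2).mean()))
```

Treating the field as a black box means one can only resample its outputs; manipulating the signal itself (filtering, warping, editing) requires operating on or through the network, which is the gap the entry above addresses.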
Author:
Peng, Songyou, Genova, Kyle, Jiang, Chiyu "Max", Tagliasacchi, Andrea, Pollefeys, Marc, Funkhouser, Thomas
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision. We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedded… (see the sketch below)
External link:
http://arxiv.org/abs/2211.15654
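The entry above enables open-vocabulary queries because per-point 3D features live in the same embedding space as text. A minimal sketch of the query step with random stand-in embeddings; a real system would use trained encoders such as CLIP, and all vectors here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512                                   # shared embedding dimension

point_feats = rng.normal(size=(1000, D))  # per-point features (stand-ins)
text_feat = rng.normal(size=D)            # embedding of a query like "sofa"

def cosine(a, b):
    return a @ b / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b))

# Score every 3D point against the text query; the high-similarity points
# form the open-vocabulary "segmentation" for that query -- no 3D labels
# for the queried concept are ever needed.
scores = cosine(point_feats, text_feat)
mask = scores > np.quantile(scores, 0.95)
print(f"top-5% match: {mask.sum()} points, best score {scores.max():.3f}")
```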
We present a generative approach to forecast long-term future human behavior in 3D, requiring only weak supervision from readily available 2D human action data. This is a fundamental task enabling many downstream applications. The required ground-truth…
External link:
http://arxiv.org/abs/2211.14309
Neural Radiance Fields (NeRFs) have demonstrated an amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of… (see the sketch below)
External link:
http://arxiv.org/abs/2208.00277
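For reference, the ray-marching step that the entry above calls mismatched to graphics hardware is the standard NeRF volume-rendering quadrature: C = Σ_i T_i (1 − exp(−σ_i δ_i)) c_i with transmittance T_i = exp(−Σ_{j<i} σ_j δ_j). A minimal numpy sketch over a single ray; the densities and colors below are made up for illustration:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Standard NeRF quadrature: alpha-composite samples along one ray.
    sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) step lengths."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]   # T_i
    weights = trans * alphas
    return weights @ colors                                          # RGB

# One ray with 64 samples: empty space, then a dense red "surface".
sigmas = np.zeros(64); sigmas[40:] = 10.0
colors = np.tile([1.0, 0.0, 0.0], (64, 1))
deltas = np.full(64, 0.05)
print(render_ray(sigmas, colors, deltas))  # approaches pure red
```

Because this loop evaluates a network at many samples per ray, it maps poorly onto rasterization pipelines, which is the mismatch the entry above targets.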