Showing 1 - 10 of 19 results for search: '"Wei, Fangyin"'
Author:
NVIDIA, Bala, Maciej, Cui, Yin, Ding, Yifan, Ge, Yunhao, Hao, Zekun, Hasselgren, Jon, Huffman, Jacob, Jin, Jingyi, Lewis, J. P., Li, Zhaoshuo, Lin, Chen-Hsuan, Lin, Yen-Chen, Lin, Tsung-Yi, Liu, Ming-Yu, Luo, Alice, Ma, Qianli, Munkberg, Jacob, Shi, Stella, Wei, Fangyin, Xiang, Donglai, Xu, Jiashu, Zeng, Xiaohui, Zhang, Qinsheng
We introduce Edify 3D, an advanced solution designed for high-quality 3D asset generation. Our method first synthesizes RGB and surface normal images of the described object at multiple viewpoints using a diffusion model. The multi-view observations…
External link:
http://arxiv.org/abs/2411.07135
Author:
NVIDIA, Atzmon, Yuval, Bala, Maciej, Balaji, Yogesh, Cai, Tiffany, Cui, Yin, Fan, Jiaojiao, Ge, Yunhao, Gururani, Siddharth, Huffman, Jacob, Isaac, Ronald, Jannaty, Pooya, Karras, Tero, Lam, Grace, Lewis, J. P., Licata, Aaron, Lin, Yen-Chen, Liu, Ming-Yu, Ma, Qianli, Mallya, Arun, Martino-Tarr, Ashlee, Mendez, Doug, Nah, Seungjun, Pruett, Chris, Reda, Fitsum, Song, Jiaming, Wang, Ting-Chun, Wei, Fangyin, Zeng, Xiaohui, Zeng, Yu, Zhang, Qinsheng
We introduce Edify Image, a family of diffusion models capable of generating photorealistic image content with pixel-perfect accuracy. Edify Image utilizes cascaded pixel-space diffusion models trained using a novel Laplacian diffusion process, in which…
External link:
http://arxiv.org/abs/2411.07126
Although 3D Gaussian Splatting has been widely studied because of its realistic and efficient novel-view synthesis, it is still challenging to extract a high-quality surface from the point-based representation. Previous works improve the surface by…
External link:
http://arxiv.org/abs/2406.05774
We consider the problem of novel-view synthesis (NVS) for dynamic scenes. Recent neural approaches have accomplished exceptional NVS results for static 3D scenes, but extensions to 4D time-varying scenes remain non-trivial. Prior efforts often encode…
External link:
http://arxiv.org/abs/2402.03307
Removing clutter from scenes is essential in many applications, ranging from privacy-concerned content filtering to data augmentation. In this work, we present an automatic system that removes clutter from 3D scenes and inpaints with coherent geometry…
External link:
http://arxiv.org/abs/2304.03763
Author:
Wei, Fangyin, Chabra, Rohan, Ma, Lingni, Lassner, Christoph, Zollhöfer, Michael, Rusinkiewicz, Szymon, Sweeney, Chris, Newcombe, Richard, Slavcheva, Mira
Learning geometry, motion, and appearance priors of object classes is important for the solution of a large variety of computer vision problems. While the majority of approaches have focused on static objects, dynamic objects, especially with controllable…
External link:
http://arxiv.org/abs/2205.08525
Many applications in 3D shape design and augmentation require the ability to make specific edits to an object's semantic parameters (e.g., the pose of a person's arm or the length of an airplane's wing) while preserving as much existing detail as possible…
External link:
http://arxiv.org/abs/2011.04755
Author:
Scheiner, Nicolas, Kraus, Florian, Wei, Fangyin, Phan, Buu, Mannan, Fahim, Appenrodt, Nils, Ritter, Werner, Dickmann, Jürgen, Dietmayer, Klaus, Sick, Bernhard, Heide, Felix
Published in:
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 2068-2077
Conventional sensor systems record information about directly visible objects, whereas occluded scene components are considered lost in the measurement process. Non-line-of-sight (NLOS) methods try to recover such hidden objects from their indirect…
External link:
http://arxiv.org/abs/1912.06613
Published in:
Elsevier, Neural Networks, Volume 110, Feb. 2019, Pages 104-115
Despite the recent success of deep learning models in numerous applications, their widespread use on mobile devices is seriously impeded by storage and computational requirements. In this paper, we propose a novel network compression method called Ad…
External link:
http://arxiv.org/abs/1906.07671
This paper proposes learning disentangled but complementary face features with minimal supervision by face identification. Specifically, we construct an identity Distilling and Dispelling Autoencoder (D2AE) framework that adversarially learns the identity…
External link:
http://arxiv.org/abs/1804.03487