Showing 1 - 10 of 1,929 for search: '"Chan, Eric"'
Author:
Levy, Axel, Chan, Eric R., Fridovich-Keil, Sara, Poitevin, Frédéric, Zhong, Ellen D., Wetzstein, Gordon
The interaction of a protein with its environment can be understood and controlled via its 3D structure. Experimental methods for protein structure determination, such as X-ray crystallography or cryogenic electron microscopy, shed light on biological…
External link:
http://arxiv.org/abs/2406.04239
Author:
Sargent, Kyle, Li, Zizhang, Shah, Tanmay, Herrmann, Charles, Yu, Hong-Xing, Zhang, Yunzhi, Chan, Eric Ryan, Lagun, Dmitry, Fei-Fei, Li, Sun, Deqing, Wu, Jiajun
We introduce a 3D-aware diffusion model, ZeroNVS, for single-image novel view synthesis for in-the-wild scenes. While existing methods are designed for single objects with masked backgrounds, we propose new techniques to address challenges introduced…
External link:
http://arxiv.org/abs/2310.17994
Author:
Po, Ryan, Yifan, Wang, Golyanik, Vladislav, Aberman, Kfir, Barron, Jonathan T., Bermano, Amit H., Chan, Eric Ryan, Dekel, Tali, Holynski, Aleksander, Kanazawa, Angjoo, Liu, C. Karen, Liu, Lingjie, Mildenhall, Ben, Nießner, Matthias, Ommer, Björn, Theobalt, Christian, Wonka, Peter, Wetzstein, Gordon
The field of visual computing is rapidly advancing due to the emergence of generative artificial intelligence (AI), which unlocks unprecedented capabilities for the generation, editing, and reconstruction of images, videos, and 3D scenes. In these domains…
External link:
http://arxiv.org/abs/2310.07204
Author:
Lin, Connor Z., Nagano, Koki, Kautz, Jan, Chan, Eric R., Iqbal, Umar, Guibas, Leonidas, Wetzstein, Gordon, Khamis, Sameh
There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction…
External link:
http://arxiv.org/abs/2305.03043
Author:
Trevithick, Alex, Chan, Matthew, Stengel, Michael, Chan, Eric R., Liu, Chao, Yu, Zhiding, Khamis, Sameh, Chandraker, Manmohan, Ramamoorthi, Ravi, Nagano, Koki
We present a one-shot method to infer and render a photorealistic 3D representation from a single unposed image (e.g., face portrait) in real-time. Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a…
External link:
http://arxiv.org/abs/2305.02310
Byzantine quorum systems provide higher throughput than proof-of-work and incur modest energy consumption. Further, their modern incarnations incorporate personalized and heterogeneous trust. Thus, they are emerging as an appealing candidate for global…
External link:
http://arxiv.org/abs/2304.04979
Author:
Chan, Eric R., Nagano, Koki, Chan, Matthew A., Bergman, Alexander W., Park, Jeong Joon, Levy, Axel, Aittala, Miika, De Mello, Shalini, Karras, Tero, Wetzstein, Gordon
We present a diffusion-based model for 3D-aware generative novel view synthesis from as few as a single input image. Our model samples from the distribution of possible renderings consistent with the input and, even in the presence of ambiguity, is capable…
External link:
http://arxiv.org/abs/2304.02602
Author:
Yu, Hong-Xing, Guo, Michelle, Fathi, Alireza, Chang, Yen-Yu, Chan, Eric Ryan, Gao, Ruohan, Funkhouser, Thomas, Wu, Jiajun
Published in:
Transactions on Machine Learning Research (TMLR), 2023
Photorealistic object appearance modeling from 2D images is a constant topic in vision and graphics. While neural implicit methods (such as Neural Radiance Fields) have shown high-fidelity view synthesis results, they cannot relight the captured objects…
External link:
http://arxiv.org/abs/2303.06138
Capturing images is a key part of automation for high-level tasks such as scene text recognition. Low-light conditions pose a challenge for high-level perception stacks, which are often optimized on well-lit, artifact-free images. Reconstruction methods…
External link:
http://arxiv.org/abs/2303.04291
Diffusion models have emerged as the state-of-the-art for image generation, among other tasks. Here, we present an efficient diffusion-based model for 3D-aware generation of neural fields. Our approach pre-processes training data, such as ShapeNet meshes…
External link:
http://arxiv.org/abs/2211.16677