Showing 1 - 10 of 118 for search: '"HAŠAN, MILOŠ"'
Author:
Ma, Xiaohe, Deschaintre, Valentin, Hašan, Miloš, Luan, Fujun, Zhou, Kun, Wu, Hongzhi, Hu, Yiwei
High-quality material generation is key for virtual environment authoring and inverse rendering. We propose MaterialPicker, a multi-modal material generator leveraging a Diffusion Transformer (DiT) architecture, improving and simplifying the creation …
External link:
http://arxiv.org/abs/2412.03225
Author:
Kuang, Zhengfei, Zhang, Tianyuan, Zhang, Kai, Tan, Hao, Bi, Sai, Hu, Yiwei, Xu, Zexiang, Hasan, Milos, Wetzstein, Gordon, Luan, Fujun
We present Buffer Anytime, a framework for estimation of depth and normal maps (which we call geometric buffers) from video that eliminates the need for paired video--depth and video--normal training data. Instead of relying on large-scale annotated …
External link:
http://arxiv.org/abs/2411.17249
Author:
Zhang, Tianyuan, Kuang, Zhengfei, Jin, Haian, Xu, Zexiang, Bi, Sai, Tan, Hao, Zhang, He, Hu, Yiwei, Hasan, Milos, Freeman, William T., Zhang, Kai, Luan, Fujun
We propose RelitLRM, a Large Reconstruction Model (LRM) for generating high-quality Gaussian splatting representations of 3D objects under novel illuminations from sparse (4-8) posed images captured under unknown static lighting. Unlike prior inverse …
External link:
http://arxiv.org/abs/2410.06231
3D Gaussian Splatting (3DGS) has shown impressive results for the novel view synthesis task, where lighting is assumed to be fixed. However, creating relightable 3D assets, especially for objects with ill-defined shapes (fur, fabric, etc.), remains a …
External link:
http://arxiv.org/abs/2409.19702
Achieving high efficiency in modern photorealistic rendering hinges on using Monte Carlo sampling distributions that closely approximate the illumination integral estimated for every pixel. Samples are typically generated from a set of simple distrib…
External link:
http://arxiv.org/abs/2409.18974
Author:
Li, Zixuan, Shen, Pengfei, Sun, Hanxiao, Zhang, Zibo, Guo, Yu, Liu, Ligang, Yan, Ling-Qi, Marschner, Steve, Hasan, Milos, Wang, Beibei
Accurately rendering the appearance of fabrics is challenging, due to their complex 3D microstructures and specialized optical properties. If we model the geometry and optics of fabrics down to the fiber level, we can achieve unprecedented rendering …
External link:
http://arxiv.org/abs/2409.06368
Author:
Cai, Guangyan, Luan, Fujun, Hašan, Miloš, Zhang, Kai, Bi, Sai, Xu, Zexiang, Georgiev, Iliyan, Zhao, Shuang
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting. In this paper, we introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material att…
External link:
http://arxiv.org/abs/2408.06878
Author:
Wiersma, Ruben, Philip, Julien, Hašan, Miloš, Mullia, Krishna, Luan, Fujun, Eisemann, Elmar, Deschaintre, Valentin
Relightable object acquisition is a key challenge in simplifying digital asset creation. Complete reconstruction of an object typically requires capturing hundreds to thousands of photographs under controlled illumination, with specialized equipment.
External link:
http://arxiv.org/abs/2406.17774
Digitizing woven fabrics would be valuable for many applications, from digital humans to interior design. Previous work introduces a lightweight woven fabric acquisition approach by capturing a single reflection image and estimating the fabric parame…
External link:
http://arxiv.org/abs/2406.19398
Author:
Guerrero-Viu, Julia, Hasan, Milos, Roullier, Arthur, Harikumar, Midhun, Hu, Yiwei, Guerrero, Paul, Gutierrez, Diego, Masia, Belen, Deschaintre, Valentin
Generative models have enabled intuitive image creation and manipulation using natural language. In particular, diffusion models have recently shown remarkable results for natural image editing. In this work, we propose to apply diffusion techniques …
External link:
http://arxiv.org/abs/2405.00672