Showing 1 - 10 of 328
for search: '"Benes, Bedrich"'
Author:
Lee, Jae Joong, Benes, Bedrich
We introduce RGB2Point, an unposed single-view RGB image to 3D point cloud generation method based on a Transformer. RGB2Point takes an input image of an object and generates a dense 3D point cloud. Contrary to prior works based on CNN layers and diffusion …
External link:
http://arxiv.org/abs/2407.14979
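The RGB2Point entry above describes a Transformer-based pipeline from a single RGB image to a dense point cloud. As a rough illustration of that general idea only (the shapes, layer sizes, and function names below are hypothetical, not taken from the paper), here is a minimal NumPy sketch: split the image into patch tokens, run one self-attention layer, and regress an (N, 3) point set from a pooled feature:

```python
import numpy as np

# Illustrative sketch only: one self-attention layer over image patch tokens,
# then a linear head that regresses an (n_points, 3) point cloud.
# All names and dimensions are hypothetical, not from the RGB2Point paper.

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k=16):
    """One attention head over (num_tokens, dim) patch embeddings."""
    dim = tokens.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((dim, d_k)) / np.sqrt(dim) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_k))   # (num_tokens, num_tokens)
    return attn @ v                          # (num_tokens, d_k)

def image_to_points(image, patch=8, n_points=1024):
    """Split an (H, W, 3) image into patch tokens, regress n_points xyz."""
    h, w, _ = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, 3)
    tokens = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * 3)
    feats = self_attention(tokens)           # contextualized patch features
    pooled = feats.mean(axis=0)              # global image feature
    W_head = rng.standard_normal((pooled.size, n_points * 3)) * 0.01
    return (pooled @ W_head).reshape(n_points, 3)

cloud = image_to_points(rng.random((64, 64, 3)))
print(cloud.shape)  # (1024, 3)
```

With random weights the output geometry is meaningless; the point is only the data flow: image, patch tokens, attention, pooled feature, point set.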
Author:
Lee, Jae Joong, Li, Bosheng, Beery, Sara, Huang, Jonathan, Fei, Songlin, Yeh, Raymond A., Benes, Bedrich
We introduce Tree D-fusion, featuring the first collection of 600,000 environmentally aware, 3D simulation-ready tree models generated through diffusion priors. Each reconstructed 3D tree model corresponds to an image from Google's Auto Arborist Data …
External link:
http://arxiv.org/abs/2407.10330
Author:
Kałużny, Jacek, Schreckenberg, Yannik, Cyganik, Karol, Annighöfer, Peter, Pirk, Sören, Michels, Dominik L., Cieslak, Mikolaj, Assaad-Gerbert, Farhah, Benes, Bedrich, Pałubicki, Wojciech
We introduce LAESI, a synthetic leaf dataset of 100,000 leaf images on millimeter paper, each with semantic masks and surface area labels. This dataset provides a resource for leaf morphology analysis, primarily aimed at beech and oak leaves …
External link:
http://arxiv.org/abs/2404.00593
Author:
Fernandez, Jorge Askur Vazquez, Lee, Jae Joong, Vacca, Santiago Andrés Serrano, Magana, Alejandra, Benes, Bedrich, Popescu, Voicu
The paper introduces Hands-Free VR, a voice-based natural-language interface for VR. The user gives a command using their voice; the speech audio data is converted to text by a speech-to-text deep learning model fine-tuned for robustness to …
External link:
http://arxiv.org/abs/2402.15083
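The Hands-Free VR entry describes converting speech to text and then acting on the command. The speech-to-text model itself is out of scope here, but the downstream step, mapping a possibly noisy transcript to the closest known command, can be sketched with the standard library (the command set and threshold are hypothetical, not from the paper):

```python
import difflib

# Illustrative sketch only: once a speech-to-text model has produced a
# transcript, map it to the closest known VR command. The command list and
# the similarity cutoff below are hypothetical, not from the paper.

COMMANDS = ["grab the red cube", "open the menu", "teleport forward",
            "drop the object", "rotate the model"]

def match_command(transcript, threshold=0.6):
    """Return the best-matching command, or None if nothing is close enough.
    difflib's similarity ratio tolerates small transcription errors."""
    best = difflib.get_close_matches(transcript.lower(), COMMANDS,
                                     n=1, cutoff=threshold)
    return best[0] if best else None

print(match_command("grab the red cub"))   # tolerates a dropped letter
print(match_command("open menu"))
print(match_command("sing a song"))        # unrelated input is rejected
```

A production interface would likely use embedding-based intent matching rather than character similarity, but the structure (transcribe, normalize, match or reject) is the same.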
Author:
Ling, Lu, Sheng, Yichen, Tu, Zhi, Zhao, Wentian, Xin, Cheng, Wan, Kun, Yu, Lantao, Guo, Qianyu, Yu, Zixun, Lu, Yawen, Li, Xuanmao, Sun, Xingpeng, Ashok, Rohan, Mukherjee, Aniruddha, Kang, Hao, Kong, Xiangrui, Hua, Gang, Zhang, Tianyi, Benes, Bedrich, Bera, Aniket
We have witnessed significant progress in deep learning-based 3D vision, ranging from neural radiance field (NeRF) based 3D representation learning to applications in novel view synthesis (NVS). However, existing scene-level datasets for deep learning …
External link:
http://arxiv.org/abs/2312.16256
Author:
Sheng, Yichen, Yu, Zixun, Ling, Lu, Cao, Zhiwen, Zhang, Cecilia, Lu, Xin, Xian, Ke, Lin, Haiting, Benes, Bedrich
Bokeh is widely used in photography to draw attention to the subject while effectively isolating distractions in the background. Computational methods simulate bokeh effects without relying on a physical camera lens. However, in the realm of digital …
External link:
http://arxiv.org/abs/2308.08843
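The bokeh entry above concerns simulating lens blur computationally. As a rough illustration of the core idea (blur strength driven by distance from the focal plane), here is a naive depth-aware box blur; this is a sketch under assumed inputs, not the paper's method, which real bokeh rendering would refine with lens-shaped kernels and occlusion handling:

```python
import numpy as np

# Illustrative sketch only: pixels whose depth is far from the focal depth
# are averaged with their neighbors; in-focus pixels are left sharp.
# Function name, parameters, and inputs are hypothetical.

def naive_bokeh(image, depth, focal_depth, radius=2, threshold=0.1):
    """image: (H, W) grayscale; depth: (H, W) in [0, 1]."""
    h, w = image.shape
    out = image.copy()
    for y in range(h):
        for x in range(w):
            if abs(depth[y, x] - focal_depth) > threshold:  # out of focus
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                out[y, x] = image[y0:y1, x0:x1].mean()      # box blur
    return out

img = np.zeros((8, 8)); img[:, 4:] = 1.0       # sharp vertical edge
depth = np.zeros((8, 8)); depth[:, 4:] = 1.0   # right half is far away
blurred = naive_bokeh(img, depth, focal_depth=0.0)
print(round(blurred[0, 4], 2))  # edge softened on the out-of-focus side
```

The in-focus left half is returned unchanged, while the out-of-focus edge is smeared, which is the qualitative behavior computational bokeh methods aim to reproduce with much higher fidelity.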
In this paper, we propose DeepTree, a novel method for modeling trees based on learning developmental rules for branching structures instead of manually defining them. We call our deep neural model situated latent because its behavior is determined by …
External link:
http://arxiv.org/abs/2305.05153
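The DeepTree entry contrasts learned developmental rules with manually defined ones. To make that contrast concrete, here is the classic manual approach it replaces: a hand-written L-system, where a single textbook rewriting rule generates a branching structure (this illustrates the traditional baseline, not DeepTree itself):

```python
# Illustrative sketch only: a hand-written L-system, the kind of manually
# defined branching rule set that DeepTree replaces with learned rules.
# The rule below is a standard textbook example, not taken from the paper.

RULES = {"F": "F[+F]F[-F]F"}   # a trunk segment spawns two side branches

def rewrite(axiom, rules, iterations):
    """Apply string-rewriting rules in parallel for a number of steps."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(rewrite("F", RULES, 1))       # F[+F]F[-F]F
print(len(rewrite("F", RULES, 3)))  # the string grows quickly with depth
```

Interpreting `F` as "draw a segment", `+`/`-` as turns, and `[`/`]` as branch push/pop yields a tree; the pain point DeepTree targets is that rules like `RULES` must otherwise be authored and tuned by hand per species.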
Many terrain modelling methods have been proposed over the past decades, providing efficient and often interactive authoring tools. However, they generally do not include any notion of style, which is a critical aspect for designers in the entertainment …
External link:
http://arxiv.org/abs/2304.09626
Author:
Lee, Jae Joong, Benes, Bedrich
Deep learning-based 3D object reconstruction has achieved unprecedented results. Among these, transformer deep neural models have shown outstanding performance in many computer vision applications. We introduce SnakeVoxFormer, a novel 3D object reconstruction …
External link:
http://arxiv.org/abs/2303.16293
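A transformer consumes a 1D token sequence, so a voxel-based reconstruction model needs some way to serialize the 3D grid. One common serialization, plausibly suggested by the "Snake" in the name but assumed here rather than taken from the paper, is a snake (boustrophedon) traversal, which keeps consecutive tokens spatially adjacent:

```python
import numpy as np

# Illustrative sketch only: serialize a (D, H, W) voxel grid into a 1D token
# sequence by reversing every other row and every other slice, so the scan
# never jumps across the volume. Details are assumptions, not taken from
# the SnakeVoxFormer paper.

def snake_flatten(grid):
    """Flatten (D, H, W) voxels in boustrophedon order."""
    out = []
    for z, plane in enumerate(grid):
        rows = plane[::-1] if z % 2 else plane       # alternate slice direction
        for y, row in enumerate(rows):
            out.extend(row[::-1] if y % 2 else row)  # alternate row direction
    return np.array(out)

grid = np.arange(8).reshape(2, 2, 2)   # tiny 2x2x2 voxel grid
print(snake_flatten(grid))             # [0 1 3 2 6 7 5 4]
```

Compared with a plain row-major flatten, every pair of consecutive tokens in the snake order corresponds to neighboring voxels, which gives attention layers a locality-friendly sequence.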
Author:
Sheng, Yichen, Zhang, Jianming, Philip, Julien, Hold-Geoffroy, Yannick, Sun, Xin, Zhang, HE, Ling, Lu, Benes, Bedrich
Lighting effects such as shadows or reflections are key in making synthetic images realistic and visually appealing. To generate such effects, traditional computer graphics uses a physically-based renderer along with 3D geometry. To compensate for the …
External link:
http://arxiv.org/abs/2303.00137