Showing 1 - 10 of 2,891 for the search: '"Busam, A."'
Author:
Jung, HyunJun, Li, Weihang, Wu, Shun-Cheng, Bittner, William, Brasch, Nikolas, Song, Jifei, Pérez-Pellitero, Eduardo, Zhang, Zhensong, Moreau, Arthur, Navab, Nassir, Busam, Benjamin
3D indoor datasets have traditionally prioritized scale over ground-truth accuracy in order to obtain better generalization. However, using these datasets to evaluate dense geometry tasks, such as depth rendering, can be problematic…
External link:
http://arxiv.org/abs/2410.22715
Author:
Vutukur, Shishir Reddy, Haugaard, Rasmus Laurvig, Huang, Junwen, Busam, Benjamin, Birdal, Tolga
Object pose distribution estimation is crucial in robotics for better path planning and for handling symmetric objects. Recent distribution estimation methods employ contrastive learning, maximizing the likelihood of a single pose…
External link:
http://arxiv.org/abs/2409.06683
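The abstract above describes a contrastive objective over pose hypotheses. Below is a minimal sketch of such an objective, assuming an InfoNCE-style loss over an image embedding, one observed pose, and sampled negative poses; all names, shapes, and the PyTorch framing are illustrative assumptions, not the paper's code:

```python
# Hypothetical sketch (not the paper's code): an InfoNCE-style contrastive
# objective that maximizes the likelihood of the observed pose embedding
# against sampled negative pose embeddings.
import torch
import torch.nn.functional as F

def contrastive_pose_loss(img_emb, pos_pose_emb, neg_pose_embs, temperature=0.1):
    """img_emb: (D,); pos_pose_emb: (D,); neg_pose_embs: (N, D)."""
    img_emb = F.normalize(img_emb, dim=-1)
    poses = F.normalize(torch.cat([pos_pose_emb[None], neg_pose_embs]), dim=-1)
    logits = poses @ img_emb / temperature      # (N+1,) similarity scores
    target = torch.zeros(1, dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits[None], target)
```

For symmetric objects, several poses can be equally valid, so maximizing the likelihood of a single pose is limiting; estimating a full pose distribution targets that gap.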
Author:
Vutukur, Shishir Reddy, Ba, Mengkejiergeli, Busam, Benjamin, Kayser, Matthias, Singh, Gurprit
In this paper, we propose a novel encoder-decoder architecture, named SABER, that learns the 6D pose of an object in an embedding space by learning a shape representation at a given pose. This model enables us to learn pose by performing shape representation…
External link:
http://arxiv.org/abs/2408.05867
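As a rough illustration of the idea the abstract sketches, here is a hypothetical encoder-decoder where a decoder conditioned on a candidate 6D pose reconstructs the object shape from a learned embedding; the architecture, layer sizes, and point-cloud output are assumptions, not SABER's actual design:

```python
# Hypothetical SABER-like sketch: an image encoder yields a shape embedding,
# and a decoder conditioned on a candidate pose (translation + quaternion)
# reconstructs the object shape at that pose.
import torch
import torch.nn as nn

class PoseShapeAutoencoder(nn.Module):
    def __init__(self, emb_dim=128, pose_dim=7, n_points=1024):
        super().__init__()
        self.n_points = n_points
        self.encoder = nn.Sequential(          # toy encoder for 3x64x64 crops
            nn.Flatten(), nn.Linear(3 * 64 * 64, emb_dim), nn.ReLU())
        self.decoder = nn.Sequential(          # decode shape at the given pose
            nn.Linear(emb_dim + pose_dim, 256), nn.ReLU(),
            nn.Linear(256, n_points * 3))

    def forward(self, image, pose):
        z = self.encoder(image)                        # (B, emb_dim)
        pts = self.decoder(torch.cat([z, pose], -1))   # (B, n_points * 3)
        return pts.view(-1, self.n_points, 3)
```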
Author:
Vutukur, Shishir Reddy, Brock, Heike, Busam, Benjamin, Birdal, Tolga, Hutter, Andreas, Ilic, Slobodan
Published in:
3DV 2024
Object pose estimation is a crucial component of robotic grasping and augmented reality. Learning-based approaches typically require training data from a highly accurate CAD model, or labeled training data acquired using a complex setup. We address…
External link:
http://arxiv.org/abs/2406.13796
Author:
Zhai, Guangyao, Örnek, Evin Pınar, Chen, Dave Zhenyu, Liao, Ruotong, Di, Yan, Navab, Nassir, Tombari, Federico, Busam, Benjamin
We present EchoScene, an interactive and controllable generative model that generates 3D indoor scenes on scene graphs. EchoScene leverages a dual-branch diffusion model that dynamically adapts to scene graphs. Existing methods struggle to handle scene…
External link:
http://arxiv.org/abs/2405.00915
Author:
Di Felice, Francesco, Remus, Alberto, Gasperini, Stefano, Busam, Benjamin, Ott, Lionel, Tombari, Federico, Siegwart, Roland, Avizzano, Carlo Alberto
Estimating the pose of objects through vision is essential for making robotic platforms interact with the environment. Yet it presents many challenges, often related to the lack of flexibility and generalizability of state-of-the-art solutions. Diffusion…
External link:
http://arxiv.org/abs/2403.14279
Author:
Stilz, Florian Philipp, Karaoglu, Mert Asim, Tristram, Felix, Navab, Nassir, Busam, Benjamin, Ladikos, Alexander
Reconstruction of endoscopic scenes is an important asset for various medical applications, from post-surgery analysis to educational training. Neural rendering has recently shown promising results in endoscopic reconstruction with deforming tissue.
External link:
http://arxiv.org/abs/2403.12198
Recent learning methods for object pose estimation require resource-intensive training for each individual object instance or category, hampering their scalability in real applications when confronted with previously unseen objects. In this paper, we…
External link:
http://arxiv.org/abs/2403.01517
Author:
Jung, HyunJun, Brasch, Nikolas, Song, Jifei, Pérez-Pellitero, Eduardo, Zhou, Yiren, Li, Zhihao, Navab, Nassir, Busam, Benjamin
Recent advances in neural radiance fields enable novel view synthesis of photo-realistic images in dynamic settings, which can be applied to scenarios with human animation. The implicit backbones commonly used to establish accurate models, however, require…
External link:
http://arxiv.org/abs/2312.15059
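For context on the implicit backbones the abstract refers to, a minimal NeRF-style sketch: an MLP maps positionally encoded 3D points to density and color. Frequencies and layer widths below are assumptions, not the paper's architecture:

```python
# Illustrative NeRF-style implicit backbone, not the paper's model.
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    # x: (..., 3) -> (..., 3 * 2 * n_freqs)
    feats = [fn((2.0 ** i) * x) for i in range(n_freqs)
             for fn in (torch.sin, torch.cos)]
    return torch.cat(feats, dim=-1)

class RadianceField(nn.Module):
    def __init__(self, n_freqs=6, hidden=256):
        super().__init__()
        self.n_freqs = n_freqs
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4))              # density + RGB

    def forward(self, xyz):
        out = self.mlp(positional_encoding(xyz, self.n_freqs))
        sigma = torch.relu(out[..., :1])       # non-negative density
        rgb = torch.sigmoid(out[..., 1:])      # color in [0, 1]
        return sigma, rgb
```

Querying such a field at many samples per camera ray is what makes these backbones accurate but expensive, which motivates the faster representations the abstract hints at.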
Recent advancements in 3D avatar generation excel with multi-view supervision for photorealistic models. However, monocular counterparts lag in quality despite broader applicability. We propose ReCaLaB to close this gap. ReCaLaB is a fully-differentiable…
External link:
http://arxiv.org/abs/2312.04784