Showing 1 - 10 of 32 for search: '"Shimada, Soshi"'
Author:
Wu, Qingxuan, Dou, Zhiyang, Xu, Sirui, Shimada, Soshi, Wang, Chen, Yu, Zhengming, Liu, Yuan, Lin, Cheng, Cao, Zeyu, Komura, Taku, Golyanik, Vladislav, Theobalt, Christian, Wang, Wenping, Liu, Lingjie
Reconstructing 3D hand-face interactions with deformations from a single image is a challenging yet crucial task with broad applications in AR, VR, and gaming. The challenges stem from self-occlusions during single-view hand-face interactions, diverse…
External link:
http://arxiv.org/abs/2406.17988
Author:
Shimada, Soshi, Mueller, Franziska, Bednarik, Jan, Doosti, Bardia, Bickel, Bernd, Tang, Danhang, Golyanik, Vladislav, Taylor, Jonathan, Theobalt, Christian, Beeler, Thabo
The physical properties of an object, such as mass, significantly affect how we manipulate it with our hands. Surprisingly, this aspect has so far been neglected in prior work on 3D motion synthesis. To improve the naturalness of the synthesized 3D h…
External link:
http://arxiv.org/abs/2312.14929
Existing methods for 3D tracking from monocular RGB videos predominantly consider articulated and rigid objects. Modelling dense non-rigid object deformations in this setting remained largely unaddressed so far, although such effects can improve the…
External link:
http://arxiv.org/abs/2309.16670
Published in:
International Conference on 3D Vision 2022 (Oral)
3D human motion capture from monocular RGB images respecting interactions of a subject with complex and possibly deformable environments is a very challenging, ill-posed and under-explored problem. Existing methods address it only weakly and do not m…
External link:
http://arxiv.org/abs/2208.08439
Author:
Akada, Hiroyasu, Wang, Jian, Shimada, Soshi, Takahashi, Masaki, Theobalt, Christian, Golyanik, Vladislav
Published in:
European Conference on Computer Vision (ECCV) 2022
We present UnrealEgo, i.e., a new large-scale naturalistic dataset for egocentric 3D human pose estimation. UnrealEgo is based on an advanced concept of eyeglasses equipped with two fisheye cameras that can be used in unconstrained environments. We d…
External link:
http://arxiv.org/abs/2208.01633
Author:
Johnson, Erik C. M., Habermann, Marc, Shimada, Soshi, Golyanik, Vladislav, Theobalt, Christian
Capturing general deforming scenes from monocular RGB video is crucial for many computer graphics and vision applications. However, current approaches suffer from drawbacks such as struggling with large scene deformations, inaccurate shape completion…
External link:
http://arxiv.org/abs/2206.08368
Author:
Shimada, Soshi, Golyanik, Vladislav, Li, Zhi, Pérez, Patrick, Xu, Weipeng, Theobalt, Christian
Marker-less monocular 3D human motion capture (MoCap) with scene interactions is a challenging research topic relevant for extended reality, robotics and virtual avatar generation. Due to the inherent depth ambiguity of monocular settings, 3D motions…
External link:
http://arxiv.org/abs/2205.05677
Author:
Yi, Xinyu, Zhou, Yuxiao, Habermann, Marc, Shimada, Soshi, Golyanik, Vladislav, Theobalt, Christian, Xu, Feng
Motion capture from sparse inertial sensors has shown great potential compared to image-based approaches since occlusions do not lead to a reduced tracking quality and the recording space is not restricted to be within the viewing frustum of the camera…
External link:
http://arxiv.org/abs/2203.08528
Published in:
International Conference on Computer Vision (ICCV) 2021
This paper proposes GraviCap, i.e., a new approach for joint markerless 3D human motion capture and object trajectory estimation from monocular RGB videos. We focus on scenes with objects partially observed during a free flight. In contrast to existing…
External link:
http://arxiv.org/abs/2108.08844
Author:
Malik, Jameel, Shimada, Soshi, Elhayek, Ahmed, Ali, Sk Aziz, Theobalt, Christian, Golyanik, Vladislav, Stricker, Didier
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
3D hand shape and pose estimation from a single depth map is a new and challenging computer vision problem with many applications. Existing methods addressing it directly regress hand meshes via 2D convolutional neural networks, which leads to artefacts…
External link:
http://arxiv.org/abs/2107.01205