Showing 1 - 10 of 35 results for search: '"Wan, Weilin"'
Text-to-motion models excel at efficient human motion generation, but existing approaches lack fine-grained controllability over the generation process. Consequently, modifying subtle postures within a motion or inserting new actions at specific moments …
External link:
http://arxiv.org/abs/2403.13900
Out-of-distribution (OOD) detection is a crucial technique for deploying machine learning models in the real world to handle unseen scenarios. In this paper, we first propose a simple yet effective Neural Activation Prior (NAP) for OOD detection. …
External link:
http://arxiv.org/abs/2402.18162
Author:
Wan, Weilin, Huang, Yiming, Wu, Shutong, Komura, Taku, Wang, Wenping, Jayaraman, Dinesh, Liu, Lingjie
In this study, we introduce a learning-based method for generating high-quality human motion sequences from text descriptions (e.g., "A person walks forward"). Existing techniques struggle with motion diversity and smooth transitions in generating a …
External link:
http://arxiv.org/abs/2312.04036
Controllable human motion synthesis is essential for applications in AR/VR, gaming, and embodied AI. Existing methods often focus solely on either language or full trajectory control, lacking precision in synthesizing motions aligned with user-specified …
External link:
http://arxiv.org/abs/2311.17135
Author:
Dou, Zhiyang, Wu, Qingxuan, Lin, Cheng, Cao, Zeyu, Wu, Qiangqiang, Wan, Weilin, Komura, Taku, Wang, Wenping
In this paper, we introduce a set of simple yet effective TOken REduction (TORE) strategies for Transformer-based Human Mesh Recovery from monocular images. Current SOTA performance is achieved by Transformer-based structures. However, they suffer from …
External link:
http://arxiv.org/abs/2211.10705
Author:
Wan, Weilin, Yang, Lei, Liu, Lingjie, Zhang, Zhuoying, Jia, Ruixing, Choi, Yi-King, Pan, Jia, Theobalt, Christian, Komura, Taku, Wang, Wenping
Published in:
IEEE Robotics and Automation Letters (Volume 7, Issue 2, April 2022)
Understanding human intentions during interactions has been a long-lasting theme that has applications in human-robot interaction, virtual reality, and surveillance. In this study, we focus on full-body human interactions with large-sized daily objects …
External link:
http://arxiv.org/abs/2206.12612
We introduce MulayCap, a novel human performance capture method using a monocular video camera without the need for pre-scanning. The method uses "multi-layer" representations for geometry reconstruction and texture rendering, respectively. For geometry …
External link:
http://arxiv.org/abs/2004.05815
Published in:
IEEE International Conference on Robotics and Automation 2019
Successfully tracking the human body is an important perceptual challenge for robots that must work around people. Existing methods fall into two broad categories: geometric tracking and direct pose estimation using machine learning. While recent work …
External link:
http://arxiv.org/abs/1908.01504
The last several years have seen significant progress in using depth cameras for tracking articulated objects such as human bodies, hands, and robotic manipulators. Most approaches focus on tracking skeletal parameters of a fixed shape model, which …
External link:
http://arxiv.org/abs/1711.07999
Academic article
This result cannot be displayed to users who are not signed in; sign in to view it.