Showing 1 - 10 of 149 results for search: '"LI Mengcheng"'
In this paper, we propose TextDestroyer, the first training- and annotation-free method for scene text destruction using a pre-trained diffusion model. Existing scene text removal models require complex annotation and retraining, and may leave faint …
External link:
http://arxiv.org/abs/2411.00355
Author:
Zhang, Jiajun, Zhang, Yuxiang, An, Liang, Li, Mengcheng, Zhang, Hongwen, Hu, Zonghai, Liu, Yebin
Dynamic and dexterous manipulation of objects presents a complex challenge, requiring the synchronization of hand motions with the trajectories of objects to achieve seamless and physically plausible interactions. In this work, we introduce ManiDext, …
External link:
http://arxiv.org/abs/2409.09300
Recent years have witnessed a trend of the deep integration of the generation and reconstruction paradigms. In this paper, we extend the ability of controllable generative models for a more comprehensive hand mesh recovery task: direct hand mesh gene…
External link:
http://arxiv.org/abs/2406.01334
Author:
Lin, Dixuan, Zhang, Yuxiang, Li, Mengcheng, Liu, Yebin, Jing, Wei, Yan, Qi, Wang, Qianying, Zhang, Hongwen
In this paper, we introduce OmniHands, a universal approach to recovering interactive hand meshes and their relative movement from monocular or multi-view inputs. Our approach addresses two major limitations of previous methods: lacking a unified sol…
External link:
http://arxiv.org/abs/2405.20330
Author:
Hu, Junxing, Zhang, Hongwen, Chen, Zerui, Li, Mengcheng, Wang, Yunlong, Liu, Yebin, Sun, Zhenan
Reconstructing hand-held objects from monocular RGB images is an appealing yet challenging task. In this task, contacts between hands and objects provide important cues for recovering the 3D geometry of the hand-held objects. Though recent works have …
External link:
http://arxiv.org/abs/2305.20089
Author:
Zhang, Hongwen, Tian, Yating, Zhang, Yuxiang, Li, Mengcheng, An, Liang, Sun, Zhenan, Liu, Yebin
We present PyMAF-X, a regression-based approach to recovering parametric full-body models from monocular images. This task is very challenging since minor parametric deviation may lead to noticeable misalignment between the estimated mesh and the inp…
External link:
http://arxiv.org/abs/2207.06400
Author:
Yu, Junlin, Wang, Xiaolian, Li, Jianfei, Luo, Debiao, Li, Mengcheng, Li, Ruixuan, He, Zhongping, Dong, Jiangfeng, Wang, Qingyuan, Guan, Zhongwei
Published in:
In Thin-Walled Structures December 2024 205 Part B
Author:
Li, Chunyan, Li, Mengcheng, Zhao, Zhenhao, Khan, Afsar, Zhao, Tianrui, Liu, Yaping, Wang, Zhengxuan, Cheng, Guiguang
Published in:
In Food Chemistry 30 October 2024 456
Author:
Wei, Huaqin, Lu, Surui, Chen, Mingqing, Yao, Runming, Yan, Biao, Li, Qing, Song, Xiaoli, Li, Mengcheng, Wu, Yang, Yang, Xu, Ma, Ping
Published in:
In Science of the Total Environment 10 October 2024 946
Graph convolutional network (GCN) has achieved great success in the single hand reconstruction task, while interacting two-hand reconstruction by GCN remains unexplored. In this paper, we present Interacting Attention Graph Hand (IntagHand), the first gr…
External link:
http://arxiv.org/abs/2203.09364