Showing 1 - 10 of 23
for search: '"Zhanning Gao"'
Author:
Wen Gao, Zhanning Gao, Pan Wang, Lingbo Yang, Xiansheng Hua, Peiran Ren, Siwei Ma, Xinfeng Zhang, Shanshe Wang, Chang Liu
Published in:
IEEE Transactions on Image Processing. 30:2422-2435
Human pose transfer (HPT) is an emerging research topic with huge potential in fashion design, media production, online advertising and virtual reality. For these applications, the visual realism of fine-grained appearance details is crucial for prod…
Published in:
IEEE Transactions on Image Processing. 29:1061-1073
Recurrent neural networks (RNNs) are capable of modeling temporal dependencies of complex sequential data. In general, currently available structures of RNNs tend to concentrate on controlling the contributions of current and previous information. However…
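(Illustrative aside, not taken from the paper above: the snippet refers to RNN structures that control the contributions of current and previous information. Below is a minimal gated-update sketch in the GRU style, using hypothetical dimensions and random weights, to show what such control looks like.)

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_step(h_prev, x, Wz, Uz, Wh, Uh):
    # One recurrent step with a single update gate (simplified, GRU-like).
    z = sigmoid(Wz @ x + Uz @ h_prev)        # gate: how much new information to admit
    h_cand = np.tanh(Wh @ x + Uh @ h_prev)   # candidate state from the current input
    return (1.0 - z) * h_prev + z * h_cand   # convex mix of previous and current information

# Toy usage with random weights and hypothetical dimensions.
rng = np.random.default_rng(0)
d_h, d_x = 4, 3
Wz, Uz, Wh, Uh = (rng.standard_normal(s) for s in [(d_h, d_x), (d_h, d_h), (d_h, d_x), (d_h, d_h)])
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_x)):      # a short input sequence of five steps
    h = gated_step(h, x, Wz, Uz, Wh, Uh)
print(h)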
Published in:
ACM Multimedia
Fashion video synthesis has attracted increasing attention due to its huge potential in immersive media, virtual reality and online retail applications, yet traditional 3D graphic pipelines often require extensive manual labor on data capture and mod…
Author:
Ian Dixon, Zhanning Gao, Zhiqi Shen, Chunyan Miao, Peiran Ren, Pan Wang, Han Yu, Xuansong Xie, Lizhen Cui, Chang Liu, Yingxue Yu
Published in:
2021 IEEE International Conference on Multimedia and Expo (ICME).
Published in:
AAAI
We propose a temporal action detection by spatial segmentation framework, which simultaneously categorizes actions and temporally localizes action instances in untrimmed videos. The core idea is the conversion of the temporal detection task into a spatial…
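(Illustrative aside, not the AAAI paper's actual pipeline: the snippet describes recasting temporal action detection as a segmentation-style labeling problem. The toy sketch below reads action instances off per-frame class labels as contiguous runs; the label array and helper function are hypothetical.)

import numpy as np

def segments_from_frame_labels(labels, background=0):
    # Group contiguous frames sharing the same non-background label into (begin, end, class) instances.
    instances, start, current = [], None, None
    for t, lab in enumerate(np.append(labels, background)):   # sentinel closes the final run
        if start is None:
            if lab != background:
                start, current = t, lab
        elif lab != current:
            instances.append((start, t - 1, int(current)))
            start, current = (t, lab) if lab != background else (None, None)
    return instances

# Hypothetical per-frame class predictions for a 12-frame untrimmed video.
frame_labels = np.array([0, 0, 2, 2, 2, 0, 0, 1, 1, 1, 1, 0])
print(segments_from_frame_labels(frame_labels))   # [(2, 4, 2), (7, 10, 1)]: two localized instances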
Author:
Chunyan Miao, Zhanning Gao, Zhiqi Shen, Xuansong Xie, Lizhen Cui, Boyang Li, Peiran Ren, Han Yu, Chang Liu
Published in:
CVPR
The existence of noisy labels in real-world data negatively impacts the performance of deep learning models. Although much research effort has been devoted to improving robustness to noisy labels in classification tasks, the problem of noisy labels i…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::19a9541a501c23c527b7335c6553e7cf
http://arxiv.org/abs/2103.16047
A new unified video analytics framework (ER3) is proposed for complex event retrieval, recognition and recounting, based on a video imprint representation that exploits temporal correlations among image features across video frames. With…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::e10ffdbb6cba781db425dfdbbbb3a6b3
Author:
Lingbo Yang, Zhanning Gao, Siwei Ma, Wen Gao, Peiran Ren, Xuansong Xie, Shanshe Wang, Xinfeng Zhang, Pan Wang
Published in:
ICME
The ability to produce convincing textural details is essential for the fidelity of synthesized person images. However, existing methods typically follow a "warping-based" strategy that propagates appearance features through the same pathway used f…
Author:
Chunyan Miao, Lizhen Cui, Xuansong Xie, Ian Dixon, Peiran Ren, Han Yu, Chang Liu, Zhanning Gao, Zhiqi Shen, Pan Wang, Zhao Yong Lim
Published in:
IJCAI
Scopus-Elsevier
Video editing is currently a highly skill- and time-intensive process. One of the most important tasks in video editing is to compose the visual storyline. This paper outlines Visual Storyline Generator (VSG), an artificial intelligence (AI)-empowered…
Published in:
MultiMedia Modeling ISBN: 9783030377304
MMM (1)
We present an efficient approach for action co-localization in an untrimmed video by exploiting contextual and temporal features from multiple action proposals. Most existing action localization methods focus on each individual action instance without…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_________::66ba9ba8070660ff4796d0360a974c9f
https://doi.org/10.1007/978-3-030-37731-1_45