Showing 1 - 10 of 146 for search: '"Li, Guozhang"'
Contrastive Language-Image Pre-Training (CLIP) has shown impressive performance in short-term Person Re-Identification (ReID) due to its ability to extract high-level semantic features of pedestrians, yet its direct application to Cloth-Changing Pers…
External link:
http://arxiv.org/abs/2406.09198
Early weakly supervised video grounding (WSVG) methods often struggle with incomplete boundary detection due to the absence of temporal boundary annotations. To bridge the gap between video-level and boundary-level annotation, explicit-supervision me…
External link:
http://arxiv.org/abs/2312.02483
Due to the lack of temporal annotation, current Weakly-supervised Temporal Action Localization (WTAL) methods generally suffer from over-complete or incomplete localization. In this paper, we aim to leverage text information to boost WTAL from…
External link:
http://arxiv.org/abs/2305.00607
Weakly Supervised Temporal Action Localization (WTAL) aims to classify and localize temporal boundaries of actions in a video, given only video-level category labels in the training datasets. Due to the lack of boundary information during training…
External link:
http://arxiv.org/abs/2304.12616
Author:
Xu Fengchang, Rayner Alfred, Rayner Henry Pailus, Lyu Ge, Du Shifeng, Jackel Vui Lung Chew, Li Guozhang, Wang Xinliang
Published in:
IEEE Access, Vol 12, Pp 115838-115852 (2024)
A small target object refers to an object whose bounding box is very small relative to the image: typically, the ratio of the bounding-box width (or height) to the original image's width (or height) is less than 0.1, or the ratio of the area of the boundi…
External link:
https://doaj.org/article/21d8d486c8ca4fd0907c08dc4742ae21
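The width/height part of the relative-size criterion above can be sketched as a simple check. This is an illustrative helper under my own naming, not code from the cited paper, and the area-based alternative is omitted because the abstract is truncated before it is fully stated:

```python
def is_small_object(box_w, box_h, img_w, img_h, ratio_thresh=0.1):
    """Return True if a bounding box counts as a 'small target' under the
    relative-size criterion: both the box width and the box height are
    less than `ratio_thresh` (0.1 here) of the corresponding image
    dimension. Illustrative sketch, not the paper's implementation."""
    return box_w / img_w < ratio_thresh and box_h / img_h < ratio_thresh

# Example: a 50x40 box in a 1920x1080 image is a small target,
# since 50/1920 ≈ 0.026 and 40/1080 ≈ 0.037 are both below 0.1.
print(is_small_object(50, 40, 1920, 1080))  # → True
```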
Published in:
Intelligent Decision Technologies; 2024, Vol. 18 Issue 4, p2885-2899, 15p
Published in:
Intelligent Decision Technologies; 2024, Vol. 18 Issue 4, p2901-2913, 13p
Published in:
In Journal of Petroleum Science and Engineering October 2019 181
Published in:
In Fuel 15 August 2019 250:65-78
Published in:
IEEE Transactions on Neural Networks and Learning Systems; September 2024, Vol. 35 Issue: 9 p13032-13045, 14p