Showing 1 - 10 of 162 results for search: '"Kong, Weijie"'
Author:
Tu, Rong-Cheng, Ji, Yatai, Jiang, Jie, Kong, Weijie, Cai, Chengfei, Zhao, Wenzhe, Wang, Hongfa, Yang, Yujiu, Liu, Wei
Cross-modal alignment plays a crucial role in vision-language pre-training (VLP) models, enabling them to capture meaningful associations across different modalities. For this purpose, numerous masked modeling tasks have been proposed for VLP to furt…
External link:
http://arxiv.org/abs/2306.07096
Author:
Ji, Yatai, Tu, Rongcheng, Jiang, Jie, Kong, Weijie, Cai, Chengfei, Zhao, Wenzhe, Wang, Hongfa, Yang, Yujiu, Liu, Wei
Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct corresponding information across different modalities. For this purpose, inspired by the success of masked language modeling (MLM) tasks in the NLP…
External link:
http://arxiv.org/abs/2211.13437
Author:
Lin, Kevin Qinghong, Wang, Alex Jinpeng, Soldan, Mattia, Wray, Michael, Yan, Rui, Xu, Eric Zhongcong, Gao, Difei, Tu, Rongcheng, Zhao, Wenzhe, Kong, Weijie, Cai, Chengfei, Wang, Hongfa, Damen, Dima, Ghanem, Bernard, Liu, Wei, Shou, Mike Zheng
In this report, we propose a video-language pretraining (VLP) based solution \cite{kevin2022egovlp} for four Ego4D challenge tasks, including Natural Language Query (NLQ), Moment Query (MQ), Object State Change Classification (OSCC), and PNR Localiza…
External link:
http://arxiv.org/abs/2207.01622
Author:
Lin, Kevin Qinghong, Wang, Alex Jinpeng, Yan, Rui, Xu, Eric Zhongcong, Tu, Rongcheng, Zhu, Yanru, Zhao, Wenzhe, Kong, Weijie, Cai, Chengfei, Wang, Hongfa, Liu, Wei, Shou, Mike Zheng
In this report, we propose a video-language pretraining (VLP) based solution \cite{kevin2022egovlp} for the EPIC-KITCHENS-100 Multi-Instance Retrieval (MIR) challenge. Especially, we exploit the recently released Ego4D dataset \cite{grauman2021ego4d}…
External link:
http://arxiv.org/abs/2207.01334
Author:
Lin, Kevin Qinghong, Wang, Alex Jinpeng, Soldan, Mattia, Wray, Michael, Yan, Rui, Xu, Eric Zhongcong, Gao, Difei, Tu, Rongcheng, Zhao, Wenzhe, Kong, Weijie, Cai, Chengfei, Wang, Hongfa, Damen, Dima, Ghanem, Bernard, Liu, Wei, Shou, Mike Zheng
Video-Language Pretraining (VLP), which aims to learn transferable representation to advance a wide range of video-text downstream tasks, has recently received increasing attention. Best performing works rely on large-scale, 3rd-person video-text dat…
External link:
http://arxiv.org/abs/2206.01670
Tencent Text-Video Retrieval: Hierarchical Cross-Modal Interactions with Multi-Level Representations
Text-Video Retrieval plays an important role in multi-modal understanding and has attracted increasing attention in recent years. Most existing methods focus on constructing contrastive pairs between whole videos and complete caption sentences, while…
External link:
http://arxiv.org/abs/2204.03382
Author:
Lv, Jingwei, Shi, Jianing, Ren, Yanru, Wang, Debao, Kong, Weijie, Liu, Qiang, Li, Wei, Yu, Ying, Wang, Jianxin, Liu, Wei, Chu, Paul K., Liu, Chao
Published in:
In Optics Communications, Volume 574, 1 January 2025
Published in:
Diagnostics, Vol 14, Iss 14, p 1520 (2024)
Background: Recently, the investigation of cerebrospinal fluid (CSF) biomarkers for diagnosing human prion diseases (HPD) has garnered significant attention. Reproducibility and accuracy are paramount in biomarker research, particularly in the measur…
External link:
https://doaj.org/article/e3cd2b255bf6466f92fb81bc163bcdbc
Academic article
This result cannot be displayed to unauthenticated users; sign in to view it.
Author:
Li, Yong, Wang, Xiaolei, Shao, Jiali, Liu, Xuguang, Kong, Weijie, Zhang, Zhonghua, Mao, Changming, Li, Zhenjiang, Liu, Jing, Li, Guicun
Published in:
In Journal of Alloys and Compounds, Volume 954, 5 September 2023