Showing 1 - 10 of 265 for search: '"LIN, YUQI"'
Author:
Zhou, Pengfei, Peng, Xiaopeng, Song, Jiajun, Li, Chuanhao, Xu, Zhaopan, Yang, Yue, Guo, Ziyao, Zhang, Hao, Lin, Yuqi, He, Yefei, Zhao, Lirui, Liu, Shuo, Li, Tianhua, Xie, Yuxuan, Chang, Xiaojun, Qiao, Yu, Shao, Wenqi, Zhang, Kaipeng
Multimodal Large Language Models (MLLMs) have made significant strides in visual understanding and generation tasks. However, generating interleaved image-text content remains a challenge, which requires integrated multimodal understanding and generation…
External link:
http://arxiv.org/abs/2411.18499
Author:
Ying, Kaining, Meng, Fanqing, Wang, Jin, Li, Zhiqian, Lin, Han, Yang, Yue, Zhang, Hao, Zhang, Wenbo, Lin, Yuqi, Liu, Shuo, Lei, Jiayi, Lu, Quanfeng, Chen, Runjian, Xu, Peng, Zhang, Renrui, Zhang, Haozhe, Gao, Peng, Wang, Yali, Qiao, Yu, Luo, Ping, Zhang, Kaipeng, Shao, Wenqi
Large Vision-Language Models (LVLMs) show significant strides in general-purpose multimodal applications such as visual dialogue and embodied navigation. However, existing multimodal evaluation benchmarks cover a limited number of multimodal tasks testing…
External link:
http://arxiv.org/abs/2404.16006
Author:
Liu, Shuo, Ying, Kaining, Zhang, Hao, Yang, Yue, Lin, Yuqi, Zhang, Tianle, Li, Chuanhao, Qiao, Yu, Luo, Ping, Shao, Wenqi, Zhang, Kaipeng
This paper presents ConvBench, a novel multi-turn conversation evaluation benchmark tailored for Large Vision-Language Models (LVLMs). Unlike existing benchmarks that assess individual capabilities in single-turn dialogues, ConvBench adopts a three-level…
External link:
http://arxiv.org/abs/2403.20194
Author:
Yang, Yue, Lin, Yuqi, Liu, Hong, Shao, Wenqi, Chen, Runjian, Shang, Hailong, Wang, Yu, Qiao, Yu, Zhang, Kaipeng, Luo, Ping
Recent text-to-image (T2I) models have had great success, and many benchmarks have been proposed to evaluate their performance and safety. However, they only consider explicit prompts while neglecting implicit prompts (hint at a target without explicitly mentioning it)…
External link:
http://arxiv.org/abs/2403.02118
Author:
Li, Hengjia, Liu, Yang, Lin, Yuqi, Zhang, Zhanwei, Zhao, Yibo, Pan, weihang, Zheng, Tu, Yang, Zheng, Jiang, Yuchun, Wu, Boxi, Cai, Deng
Recently, generative domain adaptation has achieved remarkable progress, enabling us to adapt a pre-trained generator to a new target domain. However, existing methods simply adapt the generator to a single target domain and are limited to a single modality…
External link:
http://arxiv.org/abs/2401.12596
Author:
Lin, Yuqi, Chen, Minghao, Zhang, Kaipeng, Li, Hengjia, Li, Mingming, Yang, Zheng, Lv, Dongqin, Lin, Binbin, Liu, Haifeng, Cai, Deng
Contrastive Language-Image Pre-training (CLIP) has demonstrated impressive capabilities in open-vocabulary classification. The class token in the image encoder is trained to capture the global features to distinguish different text descriptions supervised…
External link:
http://arxiv.org/abs/2312.12828
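The record above describes zero-shot, open-vocabulary classification with CLIP, where the global image embedding derived from the image encoder's class token is matched against text embeddings of candidate descriptions. Below is a minimal sketch of that inference step, assuming the Hugging Face transformers CLIP wrappers; the checkpoint name, label prompts, and image path are illustrative assumptions, not taken from the paper.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative checkpoint; the paper does not prescribe this exact model.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical open-vocabulary label set, phrased as text descriptions.
labels = ["a photo of a cat", "a photo of a dog", "a photo of a bird"]
image = Image.open("example.jpg")  # any RGB image

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# The global image embedding (from the class token) is compared with each
# text embedding; softmax over the similarity scores gives label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")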
Author:
Li, Hengjia, Liu, Yang, Xia, Linxuan, Lin, Yuqi, Zheng, Tu, Yang, Zheng, Wang, Wenxiao, Zhong, Xiaohui, Ren, Xiaobo, He, Xiaofei
Can a pre-trained generator be adapted to the hybrid of multiple target domains and generate images with integrated attributes of them? In this work, we introduce a new task -- Few-shot Hybrid Domain Adaptation (HDA). Given a source generator and several…
External link:
http://arxiv.org/abs/2310.19378
Author:
Lin, Yuqi, Chen, Minghao, Wang, Wenxiao, Wu, Boxi, Li, Ke, Lin, Binbin, Liu, Haifeng, He, Xiaofei
Weakly supervised semantic segmentation (WSSS) with image-level labels is a challenging task. Mainstream approaches follow a multi-stage framework and suffer from high training costs. In this paper, we explore the potential of Contrastive Language-Image Pre-training (CLIP)…
External link:
http://arxiv.org/abs/2212.09506
Previous work on action representation learning focused on global representations for short video clips. In contrast, many practical applications, such as video alignment, strongly demand learning the intensive representation of long videos. In this…
External link:
http://arxiv.org/abs/2212.03125
Author:
Ma, Gongshan, Gao, Xiaojin, Zhang, Xin, Li, Haixia, Geng, Zhiyuan, Gao, Jing, Yang, Shuxin, Sun, Zhiruo, Lin, Yuqi, Wen, Xiaomei, Meng, Qingguo, Zhang, Leiming, Bi, Yi
Published in:
European Journal of Medicinal Chemistry, Vol. 271, 5 May 2024