Showing 1 - 10
of 2,312
for the query: '"Chen, Xiaolin"'
Author:
Wang, Meng, Lin, Tian, Lin, Aidi, Yu, Kai, Peng, Yuanyuan, Wang, Lianyu, Chen, Cheng, Zou, Ke, Liang, Huiyu, Chen, Man, Yao, Xue, Zhang, Meiqin, Huang, Binwei, Zheng, Chaoxin, Zhang, Peixin, Chen, Wei, Luo, Yilong, Chen, Yifan, Xia, Honghe, Shi, Tingkun, Zhang, Qi, Guo, Jinming, Chen, Xiaolin, Wang, Jingcheng, Tham, Yih Chung, Liu, Dianbo, Wong, Wendy, Thakur, Sahil, Fenner, Beau, Fang, Danqi, Liu, Siying, Liu, Qingyun, Huang, Yuqiang, Zeng, Hongqiang, Meng, Yanda, Zhou, Yukun, Jiang, Zehua, Qiu, Minghui, Zhang, Changqing, Chen, Xinjian, Wang, Sophia Y, Lee, Cecilia S, Sobrin, Lucia, Cheung, Carol Y, Pang, Chi Pui, Keane, Pearse A, Cheng, Ching-Yu, Chen, Haoyu, Fu, Huazhu
Previous foundation models for retinal images were pre-trained with limited disease categories and knowledge base. Here we introduce RetiZero, a vision-language foundation model that leverages knowledge from over 400 fundus diseases. To RetiZero's pr…
External link:
http://arxiv.org/abs/2406.09317
Visual Commonsense Reasoning (VCR) calls for explanatory reasoning behind question answering over visual scenes. To achieve this goal, a model is required to provide an acceptable rationale as the reason for the predicted answers. Progress on the ben…
External link:
http://arxiv.org/abs/2405.16934
Composed image retrieval (CIR) aims to retrieve the target image based on a multimodal query, i.e., a reference image paired with corresponding modification text. Recent CIR studies leverage vision-language pre-trained (VLP) methods as the feature ex…
External link:
http://arxiv.org/abs/2404.15875
Author:
Zan, Daoguang, Yu, Ailun, Liu, Wei, Chen, Dong, Shen, Bo, Li, Wei, Yao, Yafen, Gong, Yongshun, Chen, Xiaolin, Guan, Bei, Yang, Zhiguang, Wang, Yongji, Wang, Qianxiang, Cui, Lizhen
The impressive performance of large language models (LLMs) on code-related tasks has shown the potential of fully automated software development. In light of this, we introduce a new software engineering task, namely Natural Language to code Reposito…
External link:
http://arxiv.org/abs/2403.16443
Author:
Becattini, Federico, Chen, Xiaolin, Puccia, Andrea, Wen, Haokun, Song, Xuemeng, Nie, Liqiang, Del Bimbo, Alberto
Recommending fashion items often leverages rich user profiles and makes targeted suggestions based on past history and previous purchases. In this paper, we work under the assumption that no prior knowledge is given about a user. We propose to build…
External link:
http://arxiv.org/abs/2402.11627
Code large language models (Code LLMs) have demonstrated remarkable performance in code generation. Nonetheless, most existing works focus on boosting code LLMs from the perspective of programming capabilities, while their natural language capabiliti…
External link:
http://arxiv.org/abs/2401.14242
Stemming from the high profile publication of Nissen and Wolski (2007) and subsequent discussions with divergent views on how to handle observed zero-total-event studies, defined to be studies which observe zero events in both treatment and control a…
External link:
http://arxiv.org/abs/2310.13178
Author:
Xiao, Le, Chen, Xiaolin
News summary generation is an important task in the field of intelligence analysis, which can provide accurate and comprehensive information to help people better understand and respond to complex real-world events. However, traditional news summary…
External link:
http://arxiv.org/abs/2307.02839
Textual response generation is an essential task for multimodal task-oriented dialog systems. Although existing studies have achieved fruitful progress, they still suffer from two critical limitations: 1) focusing on the attribute knowledge but ignori…
External link:
http://arxiv.org/abs/2305.09990