Showing 1 - 10 of 15 772 for the search: '"CHEN, QIANG"'
Author:
Chen, Qiang
Indiana University-Purdue University Indianapolis (IUPUI)
Food allergy is a highly prevalent and serious disease regulated by immunoglobulin E (IgE) antibodies specific for food allergens. The development of IgE is regulated by T follicular helper …
External link:
https://hdl.handle.net/1805/33191
Weakly-Supervised Dense Video Captioning (WSDVC) aims to localize and describe all events of interest in a video without requiring annotations of event boundaries. This setting poses a great challenge in accurately locating the temporal location of e…
External link:
http://arxiv.org/abs/2412.12791
Multimodal RLHF usually happens after the supervised fine-tuning (SFT) stage to continually improve vision-language models' (VLMs) comprehension. Conventional wisdom holds its superiority over continual SFT during this preference alignment stage. In this …
External link:
http://arxiv.org/abs/2411.14797
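The record above contrasts preference alignment (multimodal RLHF) with continual SFT, but the snippet does not state which alignment objective is used. As a point of reference only, here is a minimal sketch of a generic DPO-style preference loss, the kind of objective commonly optimized at this stage; the function name dpo_loss and the toy tensors are illustrative, not taken from the paper.

```python
# Hedged sketch: a generic DPO-style preference loss, shown only to illustrate
# what "preference alignment after SFT" typically optimizes; the paper's actual
# objective and model are not given in this snippet.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over per-response log-probabilities.

    Each argument is a tensor of summed token log-probs for a batch of
    (chosen, rejected) response pairs under the policy or a frozen reference.
    """
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    logits = beta * (policy_margin - ref_margin)
    return -F.logsigmoid(logits).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
b = 4
loss = dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))
print(loss.item())
```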
Author:
Sun, Yanpeng, Zhang, Huaxin, Chen, Qiang, Zhang, Xinyu, Sang, Nong, Zhang, Gang, Wang, Jingdong, Li, Zechao
We focus on improving the visual understanding capability for boosting vision-language models. We propose Arcana, a multimodal language model, which introduces two crucial techniques. First, we present Multimodal LoRA (MM-LoRA), a module …
External link:
http://arxiv.org/abs/2410.13733
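The Arcana record introduces Multimodal LoRA (MM-LoRA) but is cut off before describing it. As background only, the sketch below shows a standard low-rank adapter around a frozen linear layer, the generic building block LoRA-based modules rest on; the class LoRALinear and its hyperparameters are assumptions for illustration, not the paper's design.

```python
# Hedged sketch: a standard low-rank adapter (LoRA) wrapped around a frozen
# linear layer. MM-LoRA's exact design is not described in the truncated
# snippet; this only illustrates the generic LoRA building block.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze pretrained weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Base projection plus the trainable low-rank update.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(512, 512))
print(layer(torch.randn(2, 512)).shape)    # torch.Size([2, 512])
```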
Author:
Senbi, Ahjol, Huang, Tianyu, Lyu, Fei, Li, Qing, Tao, Yuhui, Shao, Wei, Chen, Qiang, Wang, Chengyan, Wang, Shuo, Zhou, Tao, Zhang, Yizhe
We explore the feasibility and potential of building a ground-truth-free evaluation model to assess the quality of segmentations generated by the Segment Anything Model (SAM) and its variants in medical imaging. This evaluation model estimates segmen…
External link:
http://arxiv.org/abs/2409.14874
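The record above describes a ground-truth-free model that estimates the quality of SAM-produced segmentations. The paper's architecture and training targets are not shown in the snippet; the sketch below is only a minimal illustration of the idea, regressing a score in [0, 1] from an image and a candidate mask (the class MaskQualityRegressor is hypothetical).

```python
# Hedged sketch: a quality estimator that predicts a score (e.g., an estimated
# Dice) from an image and a candidate mask, with no ground truth at test time.
import torch
import torch.nn as nn

class MaskQualityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),  # image + mask channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),                       # score in [0, 1]
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1)).squeeze(1)

model = MaskQualityRegressor()
img = torch.randn(1, 1, 128, 128)                   # grayscale medical image
mask = torch.randint(0, 2, (1, 1, 128, 128)).float()  # candidate segmentation
print(model(img, mask))                              # estimated quality score
```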
Author:
Hao, Jing, Zhao, Yuxiang, Chen, Song, Sun, Yanpeng, Chen, Qiang, Zhang, Gang, Yao, Kun, Ding, Errui, Wang, Jingdong
Multimodal Large Language Models (MLLMs) have shown promise in a broad range of vision-language tasks with their strong reasoning and generalization capabilities. However, they heavily depend on high-quality data in the Supervised Fine-Tuning (SFT) p…
External link:
http://arxiv.org/abs/2409.13540
Published in:
IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 4, pp. 1125–1136, 2020
The presence of hyperreflective foci (HFs) is related to retinal disease progression, and their quantity has proven to be a prognostic factor of visual and anatomical outcome in various retinal diseases. However, the lack of efficient quantitative tools fo…
External link:
http://arxiv.org/abs/2407.21272
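The abstract above motivates quantitative tools for hyperreflective foci (HFs). Independently of the paper's own segmentation network, the sketch below shows only the kind of counting and area statistics such a tool would report, via connected-component analysis on a binary mask; the helper quantify_foci and the pixel-area parameter are assumptions.

```python
# Hedged sketch: counting hyperreflective foci and measuring their area from a
# binary segmentation mask using connected-component labeling.
import numpy as np
from scipy import ndimage

def quantify_foci(mask: np.ndarray, pixel_area_um2: float = 1.0):
    labeled, n_foci = ndimage.label(mask > 0)
    areas = np.asarray(ndimage.sum(mask > 0, labeled, index=list(range(1, n_foci + 1))))
    return {
        "count": int(n_foci),
        "total_area_um2": float(areas.sum()) * pixel_area_um2,
        "mean_area_um2": float(areas.mean()) * pixel_area_um2 if n_foci else 0.0,
    }

# Toy mask with two foci.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:13, 10:13] = 1
mask[40:45, 40:45] = 1
print(quantify_foci(mask, pixel_area_um2=11.7))
```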
Author:
Wang, Yu, Su, Xiangbo, Chen, Qiang, Zhang, Xinyu, Xi, Teng, Yao, Kun, Ding, Errui, Zhang, Gang, Wang, Jingdong
Open-vocabulary object detection focuses on detecting novel categories guided by natural language. In this report, we propose the Open-Vocabulary Light-Weighted Detection Transformer (OVLW-DETR), a deployment-friendly open-vocabulary detector with stron…
External link:
http://arxiv.org/abs/2407.10655
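The OVLW-DETR record describes detection of novel categories guided by natural language. The detector's actual head and text encoder are not given in the snippet; the sketch below only illustrates the generic open-vocabulary classification step, scoring region features against text embeddings of category names, with random tensors standing in for a real text encoder.

```python
# Hedged sketch: open-vocabulary classification of detected regions by cosine
# similarity to text embeddings of arbitrary category names. The embeddings
# here are random placeholders, not the output of any particular text encoder.
import torch
import torch.nn.functional as F

categories = ["cat", "dog", "skateboard"]                 # vocabulary chosen at test time
text_emb = F.normalize(torch.randn(len(categories), 256), dim=-1)
region_feats = F.normalize(torch.randn(5, 256), dim=-1)   # features for 5 detected boxes

logits = region_feats @ text_emb.T / 0.07                 # cosine similarity with temperature
best = logits.argmax(dim=-1)
print([categories[i] for i in best.tolist()])             # predicted label per box
```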
Published in:
Data Intelligence, Vol. 1, Iss. 3, pp. 224–237 (2019)
Online marketers make efforts to sell more products to their customers to increase turnover. One way to sell more products is to ensure that products belonging to the same scene are offered together. As such, it is beneficial to categorize products i…
External link:
https://doaj.org/article/182ad069622045e4beecfdfd79c98746
In this paper, we propose a novel approach to enhance medical image segmentation during test time. Instead of employing hand-crafted transforms or functions on the input test image to create multiple views for test-time augmentation, we advocate for …
External link:
http://arxiv.org/abs/2406.17608
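The final record argues against hand-crafted transforms for test-time augmentation; its proposed alternative is cut off in the snippet and is not reproduced here. For context only, here is a minimal sketch of the conventional flip-and-average baseline it contrasts against (the helper tta_segment and the toy model are hypothetical).

```python
# Hedged sketch: conventional hand-crafted test-time augmentation for
# segmentation, averaging probabilities over horizontal and vertical flips.
import torch
import torch.nn as nn

def tta_segment(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Average segmentation probabilities over flipped views of the input."""
    views = [
        (image, lambda p: p),
        (torch.flip(image, dims=[-1]), lambda p: torch.flip(p, dims=[-1])),
        (torch.flip(image, dims=[-2]), lambda p: torch.flip(p, dims=[-2])),
    ]
    probs = [undo(torch.sigmoid(model(v))) for v, undo in views]
    return torch.stack(probs).mean(dim=0)

# Toy model: a single conv layer standing in for a segmentation network.
model = nn.Conv2d(1, 1, 3, padding=1)
print(tta_segment(model, torch.randn(1, 1, 64, 64)).shape)  # (1, 1, 64, 64)
```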