Showing 1 - 10
of 858
for search: '"GAO Yuting"'
Author:
LIAO Jinhao, GAO Yuting, WANG Xiang, WANG Zhiwei, XU Qiang, ZHAO Yuxing, CHI Yue, MAO Jiangfeng, YANG Hongbo
Published in:
Xiehe Yixue Zazhi, Vol 15, Iss 4, Pp 968-972 (2024)
Malignant insulinoma is a rare and challenging neuroendocrine tumor. It is often accompanied by distant metastasis, of which liver metastasis is the most common, and the prognosis is often poor. In this paper, we report a case of mult
External link:
https://doaj.org/article/d2d2ac91bf614a7799c03f89d8e21361
Author:
Pan, Wensheng, Gao, Timin, Zhang, Yan, Hu, Runze, Zheng, Xiawu, Zhang, Enwei, Gao, Yuting, Liu, Yutao, Shen, Yunhang, Li, Ke, Zhang, Shengchuan, Cao, Liujuan, Ji, Rongrong
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly. Currently, leveraging semantic information to enhance IQA is a crucial research direction. Tradit
External link:
http://arxiv.org/abs/2404.14949
Author:
Yang, Yuncheng, Zhang, Chuyan, Yang, Zuopeng, Gao, Yuting, Qin, Yulei, Li, Ke, Sun, Xing, Yang, Jie, Gu, Yun
Prompt learning is effective for fine-tuning foundation models to improve their generalization across a variety of downstream tasks. However, prompts that are independently optimized along a single modality path may sacrifice the vision-language
External link:
http://arxiv.org/abs/2403.06136
Author:
Cui, Xiao, Qin, Yulei, Gao, Yuting, Zhang, Enwei, Xu, Zihan, Wu, Tong, Li, Ke, Sun, Xing, Zhou, Wengang, Li, Houqiang
Knowledge distillation (KD) has been widely adopted to compress large language models (LLMs). Existing KD methods investigate various divergence measures including the Kullback-Leibler (KL), reverse Kullback-Leibler (RKL), and Jensen-Shannon (JS) div
External link:
http://arxiv.org/abs/2402.17110
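The abstract above lists the divergence measures commonly compared in KD work. As a minimal illustration of how those measures differ (the distributions below are hypothetical examples, not from the paper):

```python
import math

def kl_divergence(p, q):
    """Forward KL, KL(p || q) = sum_i p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def reverse_kl(p, q):
    """Reverse KL, i.e. KL(q || p): penalizes q placing mass where p has little."""
    return kl_divergence(q, p)

def js_divergence(p, q):
    """Jensen-Shannon: symmetric mixture of the two KLs, bounded by log(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Hypothetical teacher/student output distributions over three classes.
teacher = [0.7, 0.2, 0.1]
student = [0.5, 0.3, 0.2]
print(kl_divergence(teacher, student))
print(reverse_kl(teacher, student))
print(js_divergence(teacher, student))
```

Note the asymmetry: forward and reverse KL generally give different values for the same pair, which is why KD methods treat the choice of divergence as a design decision.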
Although In-Context Learning (ICL) brings remarkable performance gains to Large Language Models (LLMs), the improvements remain lower than fine-tuning on downstream tasks. This paper introduces Multi-Modal In-Context Tuning (MMICT), a novel multi-mod
External link:
http://arxiv.org/abs/2312.06363
Author:
Li, Xudong, Zheng, Jingyuan, Zheng, Xiawu, Hu, Runze, Zhang, Enwei, Gao, Yuting, Shen, Yunhang, Li, Ke, Liu, Yutao, Dai, Pingyang, Zhang, Yan, Ji, Rongrong
Image Quality Assessment (IQA) with reference images has achieved great success by imitating the human vision system, in which image quality is effectively assessed by comparing the query image with its pristine reference image. However, for the
External link:
http://arxiv.org/abs/2312.00591
Retrieval augmentation has become an effective solution to empower large language models (LLMs) with external and verified knowledge sources from the database, which overcomes the limitations and hallucinations of LLMs in handling up-to-date and doma
External link:
http://arxiv.org/abs/2311.11691
Author:
Gao, Yuting, Liu, Jinfeng, Xu, Zihan, Wu, Tong, Zhang, Enwei, Liu, Wei, Yang, Jie, Li, Ke, Sun, Xing
During the preceding biennium, vision-language pre-training has achieved noteworthy success on several downstream tasks. Nevertheless, acquiring high-quality image-text pairs, where the pairs are entirely exclusive of each other, remains a challengin
External link:
http://arxiv.org/abs/2303.17561
Author:
Chen, Peixian, Zhang, Mengdan, Shen, Yunhang, Sheng, Kekai, Gao, Yuting, Sun, Xing, Li, Ke, Shen, Chunhua
Vision transformers (ViTs) are changing the landscape of object detection approaches. A natural usage of ViTs in detection is to replace the CNN-based backbone with a transformer-based backbone, which is straightforward and effective, with the price
External link:
http://arxiv.org/abs/2206.06829
Large-scale vision-language pre-training has achieved promising results on downstream tasks. Existing methods highly rely on the assumption that the image-text pairs crawled from the Internet are in perfect one-to-one correspondence. However, in real
External link:
http://arxiv.org/abs/2204.14095