Showing 1 - 10 of 1,364 for search: '"Ji, Ke"'
Author:
Chen, Junying, Ouyang, Ruyi, Gao, Anningzhe, Chen, Shunian, Chen, Guiming Hardy, Wang, Xidong, Zhang, Ruifei, Cai, Zhenyang, Ji, Ke, Yu, Guangjun, Wan, Xiang, Wang, Benyou
The rapid development of multimodal large language models (MLLMs), such as GPT-4V, has led to significant advancements. However, these models still face challenges in medical multimodal capabilities due to limitations in the quantity and quality of …
External link:
http://arxiv.org/abs/2406.19280
Author:
Xie, Wenya, Xiao, Qingying, Zheng, Yu, Wang, Xidong, Chen, Junying, Ji, Ke, Gao, Anningzhe, Wan, Xiang, Jiang, Feng, Wang, Benyou
The recent success of Large Language Models (LLMs) has had a significant impact on the healthcare field, providing patients with medical advice, diagnostic information, and more. However, due to a lack of professional medical knowledge, patients are …
External link:
http://arxiv.org/abs/2406.18034
In the quest for super-human performance, Large Language Models (LLMs) have traditionally been tethered to human-annotated datasets and predefined training objectives, a process that is both labor-intensive and inherently limited. This paper presents …
External link:
http://arxiv.org/abs/2406.00606
Author:
Liu, Jiajun, Ke, Wenjun, Wang, Peng, Shang, Ziyu, Gao, Jinhua, Li, Guozheng, Ji, Ke, Liu, Yanhe
Traditional knowledge graph embedding (KGE) methods typically require preserving the entire knowledge graph (KG) with significant training costs when new knowledge emerges. To address this issue, the continual knowledge graph embedding (CKGE) task has …
External link:
http://arxiv.org/abs/2405.04453
Author:
Li, Guozheng, Wang, Peng, Ke, Wenjun, Guo, Yikai, Ji, Ke, Shang, Ziyu, Liu, Jiajun, Xu, Zijie
Relation extraction (RE) aims to identify relations between entities mentioned in texts. Although large language models (LLMs) have demonstrated impressive in-context learning (ICL) abilities in various tasks, they still suffer from poor performance …
External link:
http://arxiv.org/abs/2404.17809
Relation extraction (RE) is an important task that aims to identify the relationships between entities in texts. While large language models (LLMs) have revealed remarkable in-context learning (ICL) capability for general zero- and few-shot learning, …
External link:
http://arxiv.org/abs/2404.17807
Dialogue relation extraction (DRE) aims to extract relations between two arguments within a dialogue, which is more challenging than standard RE due to the higher person-pronoun frequency and lower information density in dialogues. However, existing …
External link:
http://arxiv.org/abs/2404.17802
Author:
Li, Guozheng, Ke, Wenjun, Wang, Peng, Xu, Zijie, Ji, Ke, Liu, Jiajun, Shang, Ziyu, Luo, Qiqing
The in-context learning (ICL) for relational triple extraction (RTE) has achieved promising performance, but still encounters two key challenges: (1) how to design effective prompts and (2) how to select proper demonstrations. Existing methods, however, …
External link:
http://arxiv.org/abs/2402.13741
Author:
Gao, Jingsheng, Ruan, Jiacheng, Xiang, Suncheng, Yu, Zefang, Ji, Ke, Xie, Mingye, Liu, Ting, Fu, Yuzhuo
With the success of pre-trained visual-language (VL) models such as CLIP in visual representation tasks, transferring pre-trained models to downstream tasks has become a crucial paradigm. Recently, the prompt tuning paradigm, which draws inspiration …
External link:
http://arxiv.org/abs/2312.08212
Due to the complex label hierarchy and intensive labeling cost in practice, hierarchical text classification (HTC) suffers poor performance, especially when low-resource or few-shot settings are considered. Recently, there is a growing trend of …
External link:
http://arxiv.org/abs/2305.16885