Showing 1 - 5 of 5 for search: '"Cheng, Keyuan"'
The locate-then-edit paradigm has shown significant promise for knowledge editing (KE) in Large Language Models (LLMs). While previous methods perform well on single-hop fact recall tasks, they consistently struggle with multi-hop factual recall tasks…
External link:
http://arxiv.org/abs/2410.06331
Author:
Cheng, Keyuan, Ali, Muhammad Asif, Yang, Shu, Lin, Gang, Zhai, Yuxuan, Fei, Haoyang, Xu, Ke, Yu, Lu, Hu, Lijie, Wang, Di
Multi-hop Question Answering (MQA) under knowledge editing (KE) is a key challenge in Large Language Models (LLMs). While best-performing solutions in this domain use a plan-and-solve paradigm to split a question into sub-questions followed by respon…
External link:
http://arxiv.org/abs/2405.15452
Author:
Cheng, Keyuan, Lin, Gang, Fei, Haoyang, Zhai, Yuxuan, Yu, Lu, Ali, Muhammad Asif, Hu, Lijie, Wang, Di
Multi-hop question answering (MQA) under knowledge editing (KE) has garnered significant attention in the era of large language models. However, existing models for MQA under KE exhibit poor performance when dealing with questions containing explicit…
External link:
http://arxiv.org/abs/2404.00492
Author:
Ali, Muhammad Asif, Li, Zhengping, Yang, Shu, Cheng, Keyuan, Cao, Yang, Huang, Tianhao, Hu, Guimin, Lyu, Weimin, Hu, Lijie, Yu, Lu, Wang, Di
Large Language Models (LLMs) have shown exceptional abilities for multiple different natural language processing tasks. While prompting is a crucial tool for LLM inference, we observe that there is a significant cost associated with exceedingly lengthy…
External link:
http://arxiv.org/abs/2404.00489
Author:
Yang, Shu, Su, Jiayuan, Jiang, Han, Li, Mengdi, Cheng, Keyuan, Ali, Muhammad Asif, Hu, Lijie, Wang, Di
With the rise of large language models (LLMs), ensuring they embody the principles of being helpful, honest, and harmless (3H), known as Human Alignment, becomes crucial. While existing alignment methods like RLHF, DPO, etc., effectively fine-tune LLMs…
External link:
http://arxiv.org/abs/2404.00486