Showing 1 - 7 of 7 for the search: '"Qing, Lizhi"'
Author:
Liu, Chengyuan, Wang, Shihang, Qing, Lizhi, Kuang, Kun, Kang, Yangyang, Sun, Changlong, Wu, Fei
While Large Language Models (LLMs) demonstrate impressive generation abilities, they frequently struggle in specialized domains due to their limited domain-specific knowledge. Studies on domain-specific LLMs resort to expanding the vocabulary…
External link:
http://arxiv.org/abs/2410.01188
Author:
Liu, Chengyuan, Kang, Yangyang, Wang, Shihang, Qing, Lizhi, Zhao, Fubang, Sun, Changlong, Kuang, Kun, Wu, Fei
The performance of Large Language Models (LLMs) on general tasks decreases after they are fine-tuned on domain-specific tasks, a phenomenon known as Catastrophic Forgetting (CF). However, this paper presents a further challenge for the real-world application of d…
External link:
http://arxiv.org/abs/2405.17830
Author:
Xiong, Zi, Qing, Lizhi, Kang, Yangyang, Liu, Jiawei, Li, Hongsong, Sun, Changlong, Liu, Xiaozhong, Lu, Wei
The widespread use of pre-trained language models (PLMs) in natural language processing (NLP) has greatly improved performance outcomes. However, these models' vulnerability to adversarial attacks (e.g., camouflaged hints from drug dealers), particularly…
External link:
http://arxiv.org/abs/2404.12014
Author:
Ma, Yongqiang, Qing, Lizhi, Liu, Jiawei, Kang, Yangyang, Zhang, Yue, Lu, Wei, Liu, Xiaozhong, Cheng, Qikai
Evaluating large language models (LLMs) is fundamental, particularly in the context of practical applications. Conventional evaluation methods, typically designed primarily for LLM development, yield numerical scores that ignore the user experience…
External link:
http://arxiv.org/abs/2404.07108
Large Language Models (LLMs) have exhibited remarkable proficiency in comprehending and generating natural language. On the other hand, personalized LLM response generation holds the potential to offer substantial benefits for individuals in critical…
External link:
http://arxiv.org/abs/2404.03565
Author:
Liu, Chengyuan, Zhao, Fubang, Qing, Lizhi, Kang, Yangyang, Sun, Changlong, Kuang, Kun, Wu, Fei
Large Language Models (LLMs) show significant promise in text understanding and generation. However, LLMs risk generating harmful content, especially when employed in real applications. There are several black-box attack methods…
External link:
http://arxiv.org/abs/2309.11830
Multi-task learning (MTL) has received considerable attention, and numerous deep learning applications benefit from MTL with multiple objectives. However, constructing multiple related tasks is difficult, and sometimes only a single task is available…
External link:
http://arxiv.org/abs/1911.07518