Showing 1 - 3 of 3 for search: '"Wang, Zekun Moore"'
Author:
Wang, Zekun Moore, Wang, Shawn, Zhu, Kang, Liu, Jiaheng, Xu, Ke, Fu, Jie, Zhou, Wangchunshu, Huang, Wenhao
Alignment of large language models (LLMs) involves training models on preference-contrastive output pairs to adjust their responses according to human preferences. To obtain such contrastive pairs, traditional methods like RLHF and RLAIF rely on limited…
External link:
http://arxiv.org/abs/2410.13785
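
The abstract above describes alignment as training on preference-contrastive output pairs. As an illustration only, here is a minimal sketch of one common way such pairs are used, a DPO-style preference loss in PyTorch. This is not the method of the paper above; the function name, the beta value, and the toy log-probabilities are assumptions made for the example.

# Minimal sketch of learning from preference-contrastive output pairs,
# in the style of Direct Preference Optimization (DPO). Illustrative
# only; beta and all numbers below are assumed values.
import torch
import torch.nn.functional as F

def preference_pair_loss(
    policy_chosen_logps: torch.Tensor,    # log pi_theta(y_chosen | x)
    policy_rejected_logps: torch.Tensor,  # log pi_theta(y_rejected | x)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_chosen | x)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_rejected | x)
    beta: float = 0.1,                    # assumed scaling hyperparameter
) -> torch.Tensor:
    """Push the policy to prefer the chosen response over the rejected
    one, measured relative to a frozen reference model."""
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Maximize sigmoid(beta * (chosen_margin - rejected_margin)),
    # i.e. minimize its negative log.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy usage with made-up per-sequence log-probabilities:
loss = preference_pair_loss(
    torch.tensor([-12.0]), torch.tensor([-15.0]),
    torch.tensor([-13.0]), torch.tensor([-14.5]),
)
print(loss.item())
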
Author:
Que, Haoran, Duan, Feiyu, He, Liqun, Mou, Yutao, Zhou, Wangchunshu, Liu, Jiaheng, Rong, Wenge, Wang, Zekun Moore, Yang, Jian, Zhang, Ge, Peng, Junran, Zhang, Zhaoxiang, Zhang, Songyang, Chen, Kai
In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities in various tasks (e.g., long-context understanding), and many benchmarks have been proposed. However, we observe that long text generation capabilities are not well…
External link:
http://arxiv.org/abs/2409.16191
Author:
Wang, Zekun Moore, Peng, Zhongyuan, Que, Haoran, Liu, Jiaheng, Zhou, Wangchunshu, Wu, Yuhan, Guo, Hongcheng, Gan, Ruitong, Ni, Zehao, Yang, Jian, Zhang, Man, Zhang, Zhaoxiang, Ouyang, Wanli, Xu, Ke, Huang, Stephen W., Fu, Jie, Peng, Junran
The advent of Large Language Models (LLMs) has paved the way for complex tasks such as role-playing, which enhances user interactions by enabling models to imitate various characters. However, the closed-source nature of state-of-the-art LLMs and the…
External link:
http://arxiv.org/abs/2310.00746