Author:
Xu, Tengyu, Helenowski, Eryk, Sankararaman, Karthik Abinav, Jin, Di, Peng, Kaiyan, Han, Eric, Nie, Shaoliang, Zhu, Chen, Zhang, Hejia, Zhou, Wenxuan, Zeng, Zhouhao, He, Yun, Mandyam, Karishma, Talabzadeh, Arya, Khabsa, Madian, Cohen, Gabriel, Tian, Yuandong, Ma, Hao, Wang, Sinong, Fang, Han
Reinforcement learning from human feedback (RLHF) has become the leading approach for fine-tuning large language models (LLMs). However, RLHF has limitations in multi-task learning (MTL) due to challenges of reward hacking and extreme multi-objective…
External link:
http://arxiv.org/abs/2409.20370