Prior Constraints-based Reward Model Training for Aligning Large Language Models

Author: Hang Zhou, Chenglong Wang, Yimin Hu, Tong Xiao, Chunliang Zhang, Jingbo Zhu
Publication year: 2024
Subject:
Document type: Working Paper
Description: Reinforcement learning with human feedback for aligning large language models (LLMs) typically trains a reward model using a ranking loss over comparison pairs. However, this training procedure suffers from an inherent problem: reward scores scale uncontrollably during reinforcement learning because the reward model is trained without constraints. This paper proposes a Prior Constraints-based Reward Model (PCRM) training method to mitigate this problem. PCRM incorporates prior constraints, specifically the length ratio and the cosine similarity between the outputs of each comparison pair, during reward model training to regulate the optimization magnitude and control score margins. We comprehensively evaluate PCRM by examining its rank correlation with human preferences and its effectiveness in aligning LLMs via RL. Experimental results demonstrate that PCRM significantly improves alignment performance by effectively constraining reward score scaling. As an additional benefit, our method is easily integrated into arbitrary rank-based alignment methods, such as direct preference optimization, and yields consistent improvement.
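
The abstract describes the core mechanism only at a high level: a margin derived from the length ratio and cosine similarity of each comparison pair constrains how far apart the reward scores of the two outputs are pushed. The following PyTorch sketch is a rough illustration of how such a prior-derived margin could enter a pairwise ranking loss; the function name, the specific margin formula, and the weights alpha/beta are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def prior_constrained_ranking_loss(r_chosen, r_rejected,
                                    len_chosen, len_rejected,
                                    emb_chosen, emb_rejected,
                                    alpha=1.0, beta=1.0):
    """Illustrative pairwise ranking loss with a prior-derived margin.

    r_chosen / r_rejected : reward scores for the preferred / dispreferred outputs
    len_*                 : token lengths of the two outputs
    emb_*                 : sentence embeddings of the two outputs
    alpha, beta           : hypothetical weights, not parameters named in the paper
    """
    # Length ratio in (0, 1]: similar lengths give a ratio near 1.
    len_ratio = torch.minimum(len_chosen, len_rejected) / torch.maximum(len_chosen, len_rejected)

    # Cosine similarity between the two outputs' embeddings.
    cos_sim = F.cosine_similarity(emb_chosen, emb_rejected, dim=-1)

    # Assumed margin: very similar outputs (high length ratio, high cosine
    # similarity) get a small allowed score gap, dissimilar outputs a larger one.
    margin = alpha * (1.0 - 0.5 * (len_ratio + cos_sim.clamp(min=0.0)))

    # Standard Bradley-Terry ranking term pushes the chosen score above the
    # rejected score; the ReLU penalty discourages the gap from growing far
    # beyond the prior-derived margin, constraining reward score scaling.
    gap = r_chosen - r_rejected
    rank_term = -F.logsigmoid(gap)
    constraint_term = F.relu(gap - margin)
    return (rank_term + beta * constraint_term).mean()


# Toy usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    batch = 4
    loss = prior_constrained_ranking_loss(
        r_chosen=torch.randn(batch),
        r_rejected=torch.randn(batch),
        len_chosen=torch.tensor([120., 80., 200., 45.]),
        len_rejected=torch.tensor([100., 150., 60., 50.]),
        emb_chosen=torch.randn(batch, 768),
        emb_rejected=torch.randn(batch, 768),
    )
    print(loss.item())
```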
Comment: Accepted by CCL 2024
Database: arXiv