Title |
---|
Toward Optimal LLM Alignments Using Two-Player Games. Rui Zheng, Hongyi Guo, Zhihan Liu, Xiaoying Zhang, Yuanshun Yao, ... Tao Gui, Qi Zhang, Xuanjing Huang, Hang Li, Yang Liu |
RewardBench: Evaluating Reward Models for Language Modeling. Nathan Lambert, Valentina Pyatkin, Jacob Morrison, Lester James V. Miranda, Bill Yuchen Lin, ... Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, Hanna Hajishirzi |
Human Alignment of Large Language Models through Online Preference Optimisation. Daniele Calandriello, Daniel Guo, Rémi Munos, Mark Rowland, Yunhao Tang, ... Michal Valko, Tianqi Liu, Rishabh Joshi, Zeyu Zheng, Bilal Piot |