Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback
arXiv: 2409.00162
30 August 2024
Jiayi Zhou, Yalan Qin, Juntao Dai, Yaodong Yang
Papers citing "Sequence to Sequence Reward Modeling: Improving RLHF by Language Feedback"
Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022