Simultaneous Reward Distillation and Preference Learning: Get You a Language Model Who Can Do Both


Papers citing "Simultaneous Reward Distillation and Preference Learning: Get You a Language Model Who Can Do Both"

KTO: Model Alignment as Prospect Theoretic Optimization
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela
02 Feb 2024
Nash Learning from Human Feedback
Rémi Munos, Michal Valko, Daniele Calandriello, M. G. Azar, Mark Rowland, ..., Nikola Momchev, Olivier Bachem, D. Mankowitz, Doina Precup, Bilal Piot
01 Dec 2023
