Reward Generalization in RLHF: A Topological Perspective

Papers citing "Reward Generalization in RLHF: A Topological Perspective"

14 papers
Title: KTO: Model Alignment as Prospect Theoretic Optimization
Authors: Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela
Date: 02 Feb 2024
