arXiv:2305.18290 — Direct Preference Optimization: Your Language Model is Secretly a Reward Model (29 May 2023)
Rafael Rafailov, Archit Sharma, E. Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn [ALM]
Papers citing "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" (50 of 2,637 papers shown)
- Probabilistic Inference in Language Models via Twisted Sequential Monte Carlo — Stephen Zhao, Rob Brekelmans, Alireza Makhzani, Roger C. Grosse (26 Apr 2024)
- Introducing cosmosGPT: Monolingual Training for Turkish Language Models — Himmet Toprak Kesgin, M. K. Yuce, Eren Dogan, M. E. Uzun, Atahan Uz, H. E. Seyrek, Ahmed Zeer, M. Amasyalı (26 Apr 2024)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI — Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder de Witt, Fabio Pizzati, ..., Paul Röttger, Philip Torr, Trevor Darrell, Y. Lee, Jakob N. Foerster (25 Apr 2024)
- Continual Learning of Large Language Models: A Comprehensive Survey — Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, Hao Wang (25 Apr 2024) [CLL, KELM, LRM]
- REBEL: Reinforcement Learning via Regressing Relative Rewards — Zhaolin Gao, Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kianté Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, Wen Sun (25 Apr 2024) [OffRL]
- Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model — Runzhe Zhan, Xinyi Yang, Derek F. Wong, Lidia S. Chao, Yue Zhang (25 Apr 2024)
- Tele-FLM Technical Report — Xiang Li, Yiqun Yao, Xin Jiang, Xuezhi Fang, Chao Wang, ..., Yequan Wang, Zhongjiang He, Zhongyuan Wang, Xuelong Li, Tiejun Huang (25 Apr 2024)
- Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare — Emre Can Acikgoz, Osman Batur İnce, Rayene Bench, Arda Anil Boz, İlker Kesen, Aykut Erdem, Erkut Erdem (25 Apr 2024) [LM&MA]
- Fake Artificial Intelligence Generated Contents (FAIGC): A Survey of Theories, Detection Methods, and Opportunities — Xiaomin Yu, Yezhaohui Wang, Yanfang Chen, Zhen Tao, Dinghao Xi, Shichao Song, Pengnian Qi, Zhiyu Li (25 Apr 2024)
- Towards Adapting Open-Source Large Language Models for Expert-Level Clinical Note Generation — Hanyin Wang, Chufan Gao, Bolun Liu, Qiping Xu, Guleid Hussein, Mohamad El Labban, Kingsley Iheasirim, H. Korsapati, Chuck Outcalt, Jimeng Sun (25 Apr 2024) [LM&MA, AI4MH]
- Online Personalizing White-box LLMs Generation with Neural Bandits — Zekai Chen, Weeden Daniel, Po-yu Chen, Francois Buet-Golfouse (24 Apr 2024)
- Assessing The Potential Of Mid-Sized Language Models For Clinical QA — Elliot Bolton, Betty Xiong, Vijaytha Muralidharan, J. Schamroth, Vivek Muralidharan, Christopher D. Manning, R. Daneshjou (24 Apr 2024) [AI4MH, ELM, LM&MA]
- From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models — Qi He, Jie Zeng, Qianxi He, Jiaqing Liang, Yanghua Xiao (24 Apr 2024)
- CultureBank: An Online Community-Driven Knowledge Base Towards Culturally Aware Language Technologies — Weiyan Shi, Ryan Li, Yutong Zhang, Caleb Ziems, Chunhua Yu, R. Horesh, Rogerio Abreu de Paula, Diyi Yang (23 Apr 2024)
- A Survey of Large Language Models on Generative Graph Analytics: Query, Learning, and Applications — Wenbo Shang, Xin Huang (23 Apr 2024)
- Automated Multi-Language to English Machine Translation Using Generative Pre-Trained Transformers — Elijah Pelofske, Vincent Urias, L. Liebrock (23 Apr 2024)
- From Matching to Generation: A Survey on Generative Information Retrieval — Xiaoxi Li, Jiajie Jin, Yujia Zhou, Yuyao Zhang, Peitian Zhang, Yutao Zhu, Zhicheng Dou (23 Apr 2024) [3DV]
- Insights into Alignment: Evaluating DPO and its Variants Across Multiple Tasks — Amir Saeidi, Shivanshu Verma, Chitta Baral (23 Apr 2024) [ALM]
- OpenELM: An Efficient Language Model Family with Open Training and Inference Framework — Sachin Mehta, Mohammad Hossein Sekhavat, Qingqing Cao, Maxwell Horton, Yanzi Jin, ..., Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal, Mohammad Rastegari (22 Apr 2024) [OSLM, AIFin]
- Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data — Fahim Tajwar, Anika Singh, Archit Sharma, Rafael Rafailov, Jeff Schneider, Tengyang Xie, Stefano Ermon, Chelsea Finn, Aviral Kumar (22 Apr 2024)
- Self-Supervised Alignment with Mutual Information: Learning to Follow Principles without Preference Labels — Jan-Philipp Fränken, E. Zelikman, Rafael Rafailov, Kanishk Gandhi, Tobias Gerstenberg, Noah D. Goodman (22 Apr 2024)
- Detecting and Mitigating Hallucination in Large Vision Language Models via Fine-Grained AI Feedback — Wenyi Xiao, Ziwei Huang, Leilei Gan, Wanggui He, Haoyuan Li, Zhelun Yu, Hao Jiang, Fei Wu, Linchao Zhu (22 Apr 2024) [MLLM]
- Protecting Your LLMs with Information Bottleneck — Zichuan Liu, Zefan Wang, Linjie Xu, Jinyu Wang, Lei Song, Tianchun Wang, Chunlin Chen, Wei Cheng, Jiang Bian (22 Apr 2024) [KELM, AAML]
- Optimal Design for Human Feedback — Subhojyoti Mukherjee, Anusha Lalitha, Kousha Kalantari, Aniket Deshmukh, Ge Liu, Yifei Ma, Branislav Kveton (22 Apr 2024)
- Filtered Direct Preference Optimization — Tetsuro Morimura, Mitsuki Sakamoto, Yuu Jinnai, Kenshi Abe, Kaito Ariu (22 Apr 2024)
- AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs — Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian (21 Apr 2024) [AAML]
- Mapping Social Choice Theory to RLHF — Jessica Dai, Eve Fleisig (19 Apr 2024)
- MM-PhyRLHF: Reinforcement Learning Framework for Multimodal Physics Question-Answering — Avinash Anand, Janak Kapuriya, Chhavi Kirtani, Apoorv Singh, Jay Saraf, Naman Lal, Jatin Kumar, A. Shivam, Astha Verma, R. Shah (19 Apr 2024) [OffRL]
- Beyond Human Norms: Unveiling Unique Values of Large Language Models through Interdisciplinary Approaches — Pablo Biedma, Xiaoyuan Yi, Linus Huang, Maosong Sun, Xing Xie (19 Apr 2024) [PILM]
- Relevant or Random: Can LLMs Truly Perform Analogical Reasoning? — Chengwei Qin, Wenhan Xia, Tan Wang, Fangkai Jiao, Yuchen Hu, Bosheng Ding, Ruirui Chen, Chenyu You (19 Apr 2024) [LRM]
- Improving Automated Distractor Generation for Math Multiple-choice Questions with Overgenerate-and-rank — Alexander Scarlatos, Wanyong Feng, Digory Smith, Simon Woodhead, Andrew Lan (19 Apr 2024) [AI4Ed]
- Evaluating AI for Law: Bridging the Gap with Open-Source Solutions — R. Bhambhoria, Samuel Dahan, Jonathan Li, Xiaodan Zhu (18 Apr 2024) [ELM]
- Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment — Zhaofeng Wu, Ananth Balashankar, Yoon Kim, Jacob Eisenstein, Ahmad Beirami (18 Apr 2024)
- Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing — Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, Dong Yu (18 Apr 2024) [LRM, ReLM]
- OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of Instruction Data — Chandeepa Dissanayake, Lahiru Lowe, Sachith Gunasekara, Yasiru Ratnayake (18 Apr 2024) [MoE, ALM]
- Token-level Direct Preference Optimization — Yongcheng Zeng, Guoqing Liu, Weiyu Ma, Ning Yang, Haifeng Zhang, Jun Wang (18 Apr 2024)
- Exploring the landscape of large language models: Foundations, techniques, and challenges — M. Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari (18 Apr 2024) [OffRL]
- Aligning Language Models to Explicitly Handle Ambiguity — Sungmin Cho, Youna Kim, Cheonbok Park, Junyeob Kim, Choonghyun Park, Kang Min Yoo, Sang-goo Lee, Taeuk Kim (18 Apr 2024)
- AdvisorQA: Towards Helpful and Harmless Advice-seeking Question Answering with Collective Intelligence — Minbeom Kim, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung (18 Apr 2024)
- A Preference-driven Paradigm for Enhanced Translation with Large Language Models — D. Zhu, Sony Trenous, Xiaoyu Shen, Dietrich Klakow, Bill Byrne, Eva Hasler (17 Apr 2024)
- Stepwise Alignment for Constrained Language Model Policy Optimization — Akifumi Wachi, Thien Q. Tran, Rei Sato, Takumi Tanabe, Yohei Akimoto (17 Apr 2024)
- Procedural Dilemma Generation for Evaluating Moral Reasoning in Humans and Language Models — Jan-Philipp Fränken, Kanishk Gandhi, Tori Qiu, Ayesha Khawaja, Noah D. Goodman, Tobias Gerstenberg (17 Apr 2024) [ELM]
- Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study — Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weiling Liu, Zhiyu Mei, Guangju Wang, Chao Yu, Yi Wu (16 Apr 2024)
- Self-playing Adversarial Language Game Enhances LLM Reasoning — Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du, Xiaolong Li (16 Apr 2024) [SyDa, LRM, ReLM]
- Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training — Masanori Hirano, Kentaro Imajo (16 Apr 2024) [CLL]
- Self-Supervised Visual Preference Alignment — Ke Zhu, Liang Zhao, Zheng Ge, Xiangyu Zhang (16 Apr 2024)
- MEEL: Multi-Modal Event Evolution Learning — Zhengwei Tao, Zhi Jin, Junqiang Huang, Xiancai Chen, Xiaoying Bai, Haiyan Zhao, Yifan Zhang, Chongyang Tao (16 Apr 2024)
- Self-Explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards — Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo (16 Apr 2024) [LRM, ReLM]
- Enhancing Confidence Expression in Large Language Models Through Learning from Past Experience — Haixia Han, Tingyun Li, Shisong Chen, Jie Shi, Chengyu Du, Yanghua Xiao, Jiaqing Liang, Xin Lin (16 Apr 2024)
- Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback — Vincent Conitzer, Rachel Freedman, J. Heitzig, Wesley H. Holliday, Bob M. Jacobs, ..., Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde, W. Zwicker (16 Apr 2024)