ResearchTrend.AI

arXiv:2305.18290 · Cited By
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
29 May 2023
Rafael Rafailov
Archit Sharma
Eric Mitchell
Stefano Ermon
Christopher D. Manning
Chelsea Finn
    ALM

Papers citing "Direct Preference Optimization: Your Language Model is Secretly a Reward Model"

Showing 50 of 2,611 citing papers
Synergistic Weak-Strong Collaboration by Aligning Preferences
Yizhu Jiao
Xuchao Zhang
Zhaoyang Wang
Yubo Ma
Zhun Deng
Rujia Wang
Chetan Bansal
Saravan Rajmohan
Jiawei Han
Huaxiu Yao
184
0
0
21 Apr 2025
Learning to Reason under Off-Policy Guidance
Jianhao Yan
Yafu Li
Zican Hu
Zhi Wang
Ganqu Cui
Xiaoye Qu
Yu Cheng
Yue Zhang
OffRL
LRM
44
0
0
21 Apr 2025
DRAGON: Distributional Rewards Optimize Diffusion Generative Models
Yatong Bai
Jonah Casebeer
Somayeh Sojoudi
Nicholas J. Bryan
DiffM
VLM
63
1
0
21 Apr 2025
FlowReasoner: Reinforcing Query-Level Meta-Agents
Hongcheng Gao
Yue Liu
Yufei He
Longxu Dou
C. Du
Zhijie Deng
Bryan Hooi
Min Lin
Tianyu Pang
AIFin
LRM
29
1
0
21 Apr 2025
In-context Ranking Preference Optimization
Junda Wu
Rohan Surana
Zhouhang Xie
Yiran Shen
Yu Xia
Tong Yu
Ryan Rossi
Prithviraj Ammanabrolu
Julian McAuley
40
0
0
21 Apr 2025
DSPO: Direct Semantic Preference Optimization for Real-World Image Super-Resolution
Miaomiao Cai
Simiao Li
Wei Li
X. Y. Huang
Hanting Chen
Jie Hu
Yunhe Wang
32
0
0
21 Apr 2025
Trillion 7B Technical Report
Sungjun Han
Juyoung Suk
Suyeong An
Hyungguk Kim
Kyuseok Kim
Wonsuk Yang
Seungtaek Choi
Jamin Shin
164
1
0
21 Apr 2025
Integrating Symbolic Execution into the Fine-Tuning of Code-Generating LLMs
Marina Sakharova
Abhinav Anand
Mira Mezini
59
0
0
21 Apr 2025
MrGuard: A Multilingual Reasoning Guardrail for Universal LLM Safety
Yahan Yang
Soham Dan
Shuo Li
Dan Roth
Insup Lee
LRM
36
0
0
21 Apr 2025
AlignRAG: Leveraging Critique Learning for Evidence-Sensitive Retrieval-Augmented Reasoning
Jiaqi Wei
Hao Zhou
Xiang Zhang
Di Zhang
Zijie Qiu
Wei Wei
Jinzhe Li
Wanli Ouyang
Siqi Sun
34
0
0
21 Apr 2025
Text-to-Decision Agent: Learning Generalist Policies from Natural Language Supervision
Shilin Zhang
Zican Hu
Wenhao Wu
Xinyi Xie
Jianxiang Tang
Chunlin Chen
Daoyi Dong
Yu Cheng
Zhenhong Sun
Zhi Wang
OffRL
190
0
0
21 Apr 2025
A Framework for Benchmarking and Aligning Task-Planning Safety in LLM-Based Embodied Agents
Yuting Huang
Leilei Ding
ZhiPeng Tang
Tianfu Wang
Xinrui Lin
Wenbo Zhang
Mingxiao Ma
Yanyong Zhang
LLMAG
40
0
0
20 Apr 2025
LeetCodeDataset: A Temporal Dataset for Robust Evaluation and Efficient Training of Code LLMs
Yunhui Xia
Wei Shen
Yan Wang
Jason Klein Liu
Huifeng Sun
Siyue Wu
Jian Hu
Xiaolong Xu
AI4TS
30
1
0
20 Apr 2025
LoRe: Personalizing LLMs via Low-Rank Reward Modeling
Avinandan Bose
Zhihan Xiong
Yuejie Chi
Simon S. Du
Lin Xiao
Maryam Fazel
33
0
0
20 Apr 2025
SUDO: Enhancing Text-to-Image Diffusion Models with Self-Supervised Direct Preference Optimization
Liang Peng
Boxi Wu
Haoran Cheng
Yibo Zhao
Xiaofei He
36
0
0
20 Apr 2025
ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data
Tong Chen
Faeze Brahman
Jiacheng Liu
Niloofar Mireshghallah
Weijia Shi
Pang Wei Koh
Luke Zettlemoyer
Hannaneh Hajishirzi
40
0
0
20 Apr 2025
Towards NSFW-Free Text-to-Image Generation via Safety-Constraint Direct Preference Optimization
Shouwei Ruan
Zhenyu Wu
Yao Huang
Ruochen Zhang
Yitong Sun
Caixin Kang
Xingxing Wei
EGVM
53
0
0
19 Apr 2025
Direct Advantage Regression: Aligning LLMs with Online AI Reward
Li He
He Zhao
Stephen Wan
Dadong Wang
Lina Yao
Tongliang Liu
38
0
0
19 Apr 2025
TALES: Text Adventure Learning Environment Suite
Christopher Zhang Cui
Xingdi Yuan
Ziang Xiao
Prithviraj Ammanabrolu
Marc-Alexandre Côté
LLMAG
LRM
52
1
0
19 Apr 2025
From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs
Jiliang Ni
Jiachen Pu
Zhongyi Yang
Kun Zhou
Hui Wang
Xiaoliang Xiao
Dakui Wang
Xin Li
Jingfeng Luo
Conggang Hu
39
0
0
18 Apr 2025
Continual Pre-Training is (not) What You Need in Domain Adaption
Pin-Er Chen
Da-Chen Lian
S. Hsieh
Sieh-Chuen Huang
Hsuan-Lei Shao
...
Yang-Hsien Lin
Zih-Ching Chen
Cheng-Kuang
Eddie TC Huang
Simon See
CLL
AILaw
67
1
0
18 Apr 2025
VideoPASTA: 7K Preference Pairs That Matter for Video-LLM Alignment
Yogesh Kulkarni
Pooyan Fazli
38
0
0
18 Apr 2025
Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
Yang Yue
Zhiqi Chen
Rui Lu
Andrew Zhao
Zhaokai Wang
Yang Yue
Shiji Song
Gao Huang
ReLM
LRM
61
21
0
18 Apr 2025
DP2Unlearning: An Efficient and Guaranteed Unlearning Framework for LLMs
Tamim Al Mahmud
N. Jebreel
Josep Domingo-Ferrer
David Sánchez
MU
32
0
0
18 Apr 2025
Prejudge-Before-Think: Enhancing Large Language Models at Test-Time by Process Prejudge Reasoning
Jize Wang
Jin Jiang
Yang Liu
Hao Fei
Xunliang Cai
LRM
37
0
0
18 Apr 2025
Energy-Based Reward Models for Robust Language Model Alignment
Anamika Lochab
Ruqi Zhang
185
0
0
17 Apr 2025
Benchmarking Multi-National Value Alignment for Large Language Models
Chengyi Ju
Weijie Shi
Chengzhong Liu
Yalan Qin
Jipeng Zhang
...
Jia Zhu
Jiajie Xu
Yaodong Yang
Sirui Han
Yike Guo
190
0
0
17 Apr 2025
An All-Atom Generative Model for Designing Protein Complexes
Ruizhe Chen
Dongyu Xue
Xiangxin Zhou
Zaixiang Zheng
Xiangxiang Zeng
Quanquan Gu
26
0
0
17 Apr 2025
Low-hallucination Synthetic Captions for Large-Scale Vision-Language Model Pre-training
Xinsong Zhang
Yarong Zeng
Xinting Huang
Hu Hu
Runquan Xie
Han Hu
Zhanhui Kang
MLLM
VLM
55
0
0
17 Apr 2025
FashionDPO: Fine-tune Fashion Outfit Generation Model using Direct Preference Optimization
Mingzhe Yu
Yunshan Ma
Lei Wu
Changshuo Wang
Xue Li
Lei Meng
EGVM
58
0
0
17 Apr 2025
VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models
Haojian Huang
Haodong Chen
Shengqiong Wu
Meng Luo
Jinlan Fu
Xinya Du
Hao Zhang
Hao Fei
AI4TS
202
1
0
17 Apr 2025
Persona-judge: Personalized Alignment of Large Language Models via Token-level Self-judgment
Xiaotian Zhang
Ruizhe Chen
Yang Feng
Zuozhu Liu
45
0
0
17 Apr 2025
VLMGuard-R1: Proactive Safety Alignment for VLMs via Reasoning-Driven Prompt Optimization
Menglan Chen
Xianghe Pang
Jingjing Dong
Wenhao Wang
Yaxin Du
Siheng Chen
LRM
39
0
0
17 Apr 2025
Exploring Expert Failures Improves LLM Agent Tuning
Li-Cheng Lan
Andrew Bai
Minhao Cheng
Ruochen Wang
Cho-Jui Hsieh
LRM
207
0
0
17 Apr 2025
Science-T2I: Addressing Scientific Illusions in Image Synthesis
Jialuo Li
Wenhao Chai
Xingyu Fu
Haiyang Xu
Saining Xie
MedIm
45
0
0
17 Apr 2025
LAD-Reasoner: Tiny Multimodal Models are Good Reasoners for Logical Anomaly Detection
Weijia Li
Guanglei Chu
Jiong Chen
Guo-Sen Xie
Caifeng Shan
Fang Zhao
LRM
37
1
0
17 Apr 2025
Aligning Constraint Generation with Design Intent in Parametric CAD
Evan Casey
Tianyu Zhang
Shu Ishida
John Roger Thompson
Amir Hosein Khasahmadi
Joseph George Lambourne
P. Jayaraman
K. Willis
38
0
0
17 Apr 2025
Image-Editing Specialists: An RLAIF Approach for Diffusion Models
Elior Benarous
Yilun Du
Heng Yang
24
0
0
17 Apr 2025
AnomalyR1: A GRPO-based End-to-end MLLM for Industrial Anomaly Detection
Yuhao Chao
Jie Liu
J. Tang
Gangshan Wu
37
1
0
16 Apr 2025
ToolRL: Reward is All Tool Learning Needs
Cheng Qian
Emre Can Acikgoz
Qi He
Hongru Wang
Xiusi Chen
Dilek Hakkani-Tur
Gokhan Tur
Heng Ji
OffRL
LRM
38
7
0
16 Apr 2025
Evaluating the Diversity and Quality of LLM Generated Content
Alexander Shypula
Shuo Li
Botong Zhang
Vishakh Padmakumar
Kayo Yin
Osbert Bastani
53
1
0
16 Apr 2025
Self-alignment of Large Video Language Models with Refined Regularized Preference Optimization
Pritam Sarkar
Ali Etemad
38
0
0
16 Apr 2025
Teaching Large Language Models to Reason through Learning and Forgetting
Tianwei Ni
Allen Nie
Sapana Chaudhary
Yao Liu
Huzefa Rangwala
Rasool Fakoor
ReLM
CLL
LRM
192
0
0
15 Apr 2025
Diffusion Distillation With Direct Preference Optimization For Efficient 3D LiDAR Scene Completion
An Zhao
Shengyuan Zhang
Ling Yang
Z. Li
Jiale Wu
Haoran Xu
AnYang Wei
Perry Pengyun GU
Lingyun Sun
29
0
0
15 Apr 2025
REWARD CONSISTENCY: Improving Multi-Objective Alignment from a Data-Centric Perspective
Zhihao Xu
Yongqi Tong
Xin Zhang
Jun Zhou
Xiting Wang
40
0
0
15 Apr 2025
A Minimalist Approach to LLM Reasoning: from Rejection Sampling to Reinforce
Wei Xiong
Jiarui Yao
Yuhui Xu
Bo Pang
Lei Wang
...
Junnan Li
Nan Jiang
Tong Zhang
Caiming Xiong
Hanze Dong
OffRL
LRM
48
10
0
15 Apr 2025
Mavors: Multi-granularity Video Representation for Multimodal Large Language Model
Yang Shi
Jiaheng Liu
Yushuo Guan
Zhikai Wu
Yujie Zhang
...
Bohan Zeng
Wei Zhang
Fuzheng Zhang
Wenjing Yang
Di Zhang
VGen
VLM
73
0
0
14 Apr 2025
Better Estimation of the KL Divergence Between Language Models
Afra Amini
Tim Vieira
Ryan Cotterell
53
0
0
14 Apr 2025
Improving In-Context Learning with Reasoning Distillation
Nafis Sadeq
Xin Xu
Zhouhang Xie
Julian McAuley
Byungkyu Kang
Prarit Lamba
Xiang Gao
RALM
ReLM
LRM
40
0
0
14 Apr 2025
Learning from Reference Answers: Versatile Language Model Alignment without Binary Human Preference Data
Shuai Zhao
Linchao Zhu
Yi Yang
39
2
0
14 Apr 2025