Training language models to follow instructions with human feedback
arXiv:2203.02155 · 4 March 2022
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Papers citing "Training language models to follow instructions with human feedback"
Showing 50 of 4,604 citing papers.
- LLM Sensitivity Evaluation Framework for Clinical Diagnosis (18 Apr 2025)
  Chenwei Yan, Xiangling Fu, Yuxuan Xiong, Tianyi Wang, Siu Cheung Hui, Ji Wu, Xien Liu
- Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model? (18 Apr 2025)
  Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Yang Yue, Shiji Song, Gao Huang
- Science Hierarchography: Hierarchical Organization of Science Literature (18 Apr 2025)
  Muhan Gao, Jash Shah, Weiqi Wang, Daniel Khashabi
- Prejudge-Before-Think: Enhancing Large Language Models at Test-Time by Process Prejudge Reasoning (18 Apr 2025)
  J. T. Wang, Jin Jiang, Yang Liu, M. Zhang, Xunliang Cai
- Do Prompt Patterns Affect Code Quality? A First Empirical Assessment of ChatGPT-Generated Code (18 Apr 2025)
  Antonio Della Porta, Stefano Lambiase, Fabio Palomba
- Remedy: Learning Machine Translation Evaluation from Human Preferences with Reward Modeling (18 Apr 2025)
  Shaomu Tan, Christof Monz
- Not All Rollouts are Useful: Down-Sampling Rollouts in LLM Reinforcement Learning (18 Apr 2025)
  Yixuan Even Xu, Yash Savani, Fei Fang, Zico Kolter
- Analysing the Robustness of Vision-Language-Models to Common Corruptions (18 Apr 2025)
  Muhammad Usama, Syeda Aishah Asim, Syed Bilal Ali, Syed Talal Wasim, Umair Bin Mansoor
- Image-Editing Specialists: An RLAIF Approach for Diffusion Models (17 Apr 2025)
  Elior Benarous, Yilun Du, Heng Yang
- GraphAttack: Exploiting Representational Blindspots in LLM Safety Mechanisms (17 Apr 2025)
  Sinan He, An Wang
- VLMGuard-R1: Proactive Safety Alignment for VLMs via Reasoning-Driven Prompt Optimization (17 Apr 2025)
  Menglan Chen, Xianghe Pang, Jingjing Dong, Wenhao Wang, Yaxin Du, Siheng Chen
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation (17 Apr 2025)
  Xiangyan Liu, Jinjie Ni, Zijian Wu, Chao Du, Longxu Dou, Haoran Wang, Tianyu Pang, Michael Shieh
- SMPL-GPTexture: Dual-View 3D Human Texture Estimation using Text-to-Image Generation Models (17 Apr 2025)
  Mingxiao Tu, Shuchang Ye, Hoijoon Jung, Jinman Kim
- Data-efficient LLM Fine-tuning for Code Generation (17 Apr 2025)
  Weijie Lv, X. Xia, Sheng-Jun Huang
- Energy-Based Reward Models for Robust Language Model Alignment (17 Apr 2025)
  Anamika Lochab, Ruqi Zhang
- Governance Challenges in Reinforcement Learning from Human Feedback: Evaluator Rationality and Reinforcement Stability (17 Apr 2025)
  Dana Alsagheer, Abdulrahman Kamal, Mohammad Kamal, W. Shi
- Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo (17 Apr 2025)
  João Loula, Benjamin LeBrun, Li Du, Ben Lipkin, Clemente Pasti, ..., Ryan Cotterell, Vikash K. Mansinghka, Alexander K. Lew, Tim Vieira, Timothy J. O'Donnell
- Aligning Constraint Generation with Design Intent in Parametric CAD (17 Apr 2025)
  Evan Casey, Tianyu Zhang, Shu Ishida, John Roger Thompson, Amir Hosein Khasahmadi, Joseph George Lambourne, P. Jayaraman, K. Willis
- MAIN: Mutual Alignment Is Necessary for instruction tuning (17 Apr 2025)
  Fanyi Yang, Jianfeng Liu, Xiaotian Zhang, Haoyu Liu, Xixin Cao, Yuefeng Zhan, H. Sun, Weiwei Deng, Feng Sun, Qi Zhang
- Science-T2I: Addressing Scientific Illusions in Image Synthesis (17 Apr 2025)
  Jialuo Li, Wenhao Chai, Xingyu Fu, Haiyang Xu, Saining Xie
- Persona-judge: Personalized Alignment of Large Language Models via Token-level Self-judgment (17 Apr 2025)
  Xiaotian Zhang, Ruizhe Chen, Yang Feng, Zuozhu Liu
- EarthGPT-X: Enabling MLLMs to Flexibly and Comprehensively Understand Multi-Source Remote Sensing Imagery (17 Apr 2025)
  Wei Zhang, Miaoxin Cai, Yaqian Ning, T. Zhang, Yin Zhuang, He Chen, Jun Li, Xuerui Mao
- Design Topological Materials by Reinforcement Fine-Tuned Generative Model (17 Apr 2025)
  Haosheng Xu, Dongheng Qian, Zhixuan Liu, Yadong Jiang, Jing Wang
- d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning (16 Apr 2025)
  Siyan Zhao, Devaansh Gupta, Qinqing Zheng, Aditya Grover
- An LLM-as-a-judge Approach for Scalable Gender-Neutral Translation Evaluation (16 Apr 2025)
  Andrea Piergentili, Beatrice Savoldi, Matteo Negri, L. Bentivogli
- Multilingual Contextualization of Large Language Models for Document-Level Machine Translation (16 Apr 2025)
  Miguel Moura Ramos, Patrick Fernandes, Sweta Agrawal, André F.T. Martins
- Evaluating the Diversity and Quality of LLM Generated Content (16 Apr 2025)
  Alexander Shypula, Shuo Li, Botong Zhang, Vishakh Padmakumar, Kayo Yin, Osbert Bastani
- Can Pre-training Indicators Reliably Predict Fine-tuning Outcomes of LLMs? (16 Apr 2025)
  Hansi Zeng, Kai Hui, Honglei Zhuang, Zhen Qin, Zhenrui Yue, Hamed Zamani, Dana Alon
- Reinforcing Compositional Retrieval: Retrieving Step-by-Step for Composing Informative Contexts (15 Apr 2025)
  Quanyu Long, Jianda Chen, Zhengyuan Liu, Nancy F. Chen, Wenya Wang, Sinno Jialin Pan
- Dynamic Compressing Prompts for Efficient Inference of Large Language Models (15 Apr 2025)
  Jinwu Hu, W. Zhang, Yufeng Wang, Yu Hu, Bin Xiao, Mingkui Tan, Qing Du
- A Minimalist Approach to LLM Reasoning: from Rejection Sampling to Reinforce (15 Apr 2025)
  Wei Xiong, Jiarui Yao, Yuhui Xu, Bo Pang, Lei Wang, ..., Junnan Li, Nan Jiang, Tong Zhang, Caiming Xiong, Hanze Dong
- REWARD CONSISTENCY: Improving Multi-Objective Alignment from a Data-Centric Perspective (15 Apr 2025)
  Zhihao Xu, Yongqi Tong, Xin Zhang, Jun Zhou, Xiting Wang
- ReTool: Reinforcement Learning for Strategic Tool Use in LLMs (15 Apr 2025)
  Jiazhan Feng, Shijue Huang, Xingwei Qu, Ge Zhang, Yujia Qin, Baoquan Zhong, Chengquan Jiang, Jinxin Chi, Wanjun Zhong
- ReZero: Enhancing LLM search ability by trying one-more-time (15 Apr 2025)
  Alan Dao, Thinh Le
- Optimizing LLM Inference: Fluid-Guided Online Scheduling with Memory Constraints (15 Apr 2025)
  Ruicheng Ao, Gan Luo, D. Simchi-Levi, Xinshang Wang
- DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks (15 Apr 2025)
  Yupei Liu, Yuqi Jia, Jinyuan Jia, Dawn Song, Neil Zhenqiang Gong
- Diffusion Distillation With Direct Preference Optimization For Efficient 3D LiDAR Scene Completion (15 Apr 2025)
  An Zhao, Shengyuan Zhang, Ling Yang, Z. Li, Jiale Wu, Haoran Xu, AnYang Wei, Perry Pengyun GU, Lingyun Sun
- Fine-Tuning Large Language Models on Quantum Optimization Problems for Circuit Generation (15 Apr 2025)
  Linus Jern, Valter Uotila, Cong Yu, Bo Zhao
- Teaching Large Language Models to Reason through Learning and Forgetting (15 Apr 2025)
  Tianwei Ni, Allen Nie, Sapana Chaudhary, Yao Liu, Huzefa Rangwala, Rasool Fakoor
- DICE: A Framework for Dimensional and Contextual Evaluation of Language Models (14 Apr 2025)
  Aryan Shrivastava, Paula Akemi Aoyagui
- Reasoning without Regret (14 Apr 2025)
  Tarun Chitra
- How Instruction and Reasoning Data shape Post-Training: Data Quality through the Lens of Layer-wise Gradients (14 Apr 2025)
  Ming Li, Yongqian Li, Ziyue Li, Tianyi Zhou
- Improving In-Context Learning with Reasoning Distillation (14 Apr 2025)
  Nafis Sadeq, Xin Xu, Zhouhang Xie, Julian McAuley, Byungkyu Kang, Prarit Lamba, Xiang Gao
- CHARM: Calibrating Reward Models With Chatbot Arena Scores (14 Apr 2025)
  Xiao Zhu, Chenmien Tan, Pinzhen Chen, Rico Sennrich, Yanlin Zhang, Hanxu Hu
- Better Estimation of the KL Divergence Between Language Models (14 Apr 2025)
  Afra Amini, Tim Vieira, Ryan Cotterell
- Learning from Reference Answers: Versatile Language Model Alignment without Binary Human Preference Data (14 Apr 2025)
  Shuai Zhao, Linchao Zhu, Yi Yang
- RealSafe-R1: Safety-Aligned DeepSeek-R1 without Compromising Reasoning Capability (14 Apr 2025)
  Y. Zhang, Zihao Zeng, Dongbai Li, Yao Huang, Zhijie Deng, Yinpeng Dong
- OctGPT: Octree-based Multiscale Autoregressive Models for 3D Shape Generation (14 Apr 2025)
  Si-Tong Wei, Rui-Huan Wang, Chuan-Zhi Zhou, Baoquan Chen, Peng-Shuai Wang
- Joint Action Language Modelling for Transparent Policy Execution (14 Apr 2025)
  Theodor Wulff, R. S. Maharjan, Xinyun Chi, Angelo Cangelosi
- InstructEngine: Instruction-driven Text-to-Image Alignment (14 Apr 2025)
  Xingyu Lu, Yihan Hu, Yichang Zhang, Kaiyu Jiang, Changyi Liu, ..., Bin Wen, C. Yuan, Fan Yang, Tingting Gao, Di Zhang