arXiv: 2203.02155
Training language models to follow instructions with human feedback
4 March 2022
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM
Papers citing "Training language models to follow instructions with human feedback" (50 of 6,390 papers shown)
Safeguarding LLM Embeddings in End-Cloud Collaboration via Entropy-Driven Perturbation
Shuaifan Jin, Xiaoyi Pang, Peng Kuang, He Wang, Jiacheng Du, Jiahui Hu, Kui Ren · SILM, AAML · 132 · 0 · 0 · 17 Mar 2025

Reward-Instruct: A Reward-Centric Approach to Fast Photo-Realistic Image Generation
Yihong Luo, Tianyang Hu, Weijian Luo, Kenji Kawaguchi, Jing Tang · EGVM · 470 · 0 · 0 · 17 Mar 2025

DLPO: Towards a Robust, Efficient, and Generalizable Prompt Optimization Framework from a Deep-Learning Perspective
Dengyun Peng, Yuhang Zhou, Qiguang Chen, Jinhao Liu, Jingjing Chen, L. Qin · 110 · 0 · 0 · 17 Mar 2025

Superalignment with Dynamic Human Values
Florian Mai, David Kaczér, Nicholas Kluge Corrêa, Lucie Flek · 140 · 0 · 0 · 17 Mar 2025

The Amazon Nova Family of Models: Technical Report and Model Card
Amazon AGI, Aaron Langford, A. Shah, Abhanshu Gupta, Abhimanyu Bhatter, ..., Benjamin Biggs, Benjamin Ott, Bhanu Vinzamuri, Bharath Venkatesh, Bhavana Ganesh · 30 · 21 · 0 · 17 Mar 2025

3D Human Interaction Generation: A Survey
Siyuan Fan, Wenke Huang, Xiantao Cai, Di Lin · VGen · 116 · 0 · 0 · 17 Mar 2025
LazyMAR: Accelerating Masked Autoregressive Models via Feature Caching
Feihong Yan, Qingyan Wei, Jiayi Tang, Jiajun Li, Yidan Wang, Xuming Hu, Huiqi Li, Linfeng Zhang · 97 · 2 · 0 · 16 Mar 2025

Towards Hierarchical Multi-Step Reward Models for Enhanced Reasoning in Large Language Models
Teng Wang, Zhangyi Jiang, Zhenqi He, Wenhan Yang, Yanan Zheng, Zeyu Li, Zifan He, Shenyang Tong, Hailei Gong · LRM · 172 · 2 · 0 · 16 Mar 2025

Augmented Adversarial Trigger Learning
Zhe Wang, Yanjun Qi · 96 · 0 · 0 · 16 Mar 2025

A Survey on the Optimization of Large Language Model-based Agents
Shangheng Du, Jiabao Zhao, Jinxin Shi, Zhentao Xie, Xin Jiang, Yanhong Bai, Liang He · LLMAG, LM&Ro, LM&MA · 544 · 5 · 0 · 16 Mar 2025

The Lucie-7B LLM and the Lucie Training Dataset: Open resources for multilingual language generation
Olivier Gouvert, Julie Hunter, Jérôme Louradour, Christophe Cerisara, Evan Dufraisse, Yaya Sy, Laura Rivière, Jean-Pierre Lorré, OpenLLM-France community · 459 · 0 · 0 · 15 Mar 2025

From Demonstrations to Rewards: Alignment Without Explicit Human Preferences
Siliang Zeng, Yao Liu, Huzefa Rangwala, George Karypis, Mingyi Hong, Rasool Fakoor · 126 · 2 · 0 · 15 Mar 2025

Cognitive Activation and Chaotic Dynamics in Large Language Models: A Quasi-Lyapunov Analysis of Reasoning Mechanisms
Xiaojian Li, Yongkang Leng, Ruiqing Ding, Hangjie Mo, Shanlin Yang · LRM · 80 · 1 · 0 · 15 Mar 2025
MT-RewardTree: A Comprehensive Framework for Advancing LLM-Based Machine Translation via Reward Modeling
Zhaopeng Feng, Jiahan Ren, Jiayuan Su, Jiamei Zheng, Zhihang Tang, Hongwei Wang, Zuozhu Liu · LRM · 167 · 2 · 0 · 15 Mar 2025

Prompt Sentiment: The Catalyst for LLM Change
Vishal Gandhi, Sagar Gandhi · 68 · 1 · 0 · 14 Mar 2025

LLaVA-MLB: Mitigating and Leveraging Attention Bias for Training-Free Video LLMs
Leqi Shen, Tao He, Guoqiang Gong, Fan Yang, Yuhui Zhang, Pengzhang Liu, Sicheng Zhao, Guiguang Ding · 89 · 2 · 0 · 14 Mar 2025

Align in Depth: Defending Jailbreak Attacks via Progressive Answer Detoxification
Yingjie Zhang, Tong Liu, Zhe Zhao, Guozhu Meng, Kai Chen · AAML · 109 · 1 · 0 · 14 Mar 2025

Agent-Enhanced Large Language Models for Researching Political Institutions
Joseph R. Loffredo, Suyeol Yun · LLMAG · 105 · 0 · 0 · 14 Mar 2025

Monitoring Reasoning Models for Misbehavior and the Risks of Promoting Obfuscation
Bowen Baker, Joost Huizinga, Leo Gao, Zehao Dou, M. Guan, Aleksander Mądry, Wojciech Zaremba, J. Pachocki, David Farhi · LRM · 188 · 39 · 0 · 14 Mar 2025

Implicit Bias-Like Patterns in Reasoning Models
Messi H.J. Lee, Calvin K. Lai · LRM · 122 · 0 · 0 · 14 Mar 2025

Trust in Disinformation Narratives: a Trust in the News Experiment
Hanbyul Song, Miguel F. Santos Silva, Jaume Suau, Luis Espinosa-Anke · 62 · 0 · 0 · 14 Mar 2025
reWordBench: Benchmarking and Improving the Robustness of Reward Models with Transformed Inputs
Zhaofeng Wu, Michihiro Yasunaga, Andrew Cohen, Yoon Kim, Asli Celikyilmaz, Marjan Ghazvininejad · 92 · 3 · 0 · 14 Mar 2025

Safe-VAR: Safe Visual Autoregressive Model for Text-to-Image Generative Watermarking
Ziyi Wang, Songbai Tan, Gang Xu, Xuerui Qiu, Hongbin Xu, Xin Meng, Ming Li, Fei Richard Yu · WIGM · 126 · 0 · 0 · 14 Mar 2025

D3: Diversity, Difficulty, and Dependability-Aware Data Selection for Sample-Efficient LLM Instruction Tuning
Jia Zhang, Chen-Xi Zhang, Yang Liu, Yi-Xuan Jin, Xiao-Wen Yang, Bo Zheng, Yi Liu, Lan-Zhe Guo · 145 · 3 · 0 · 14 Mar 2025

Broaden your SCOPE! Efficient Multi-turn Conversation Planning for LLMs with Semantic Space
Zhiliang Chen, Xinyuan Niu, Chuan-Sheng Foo, Bryan Kian Hsiang Low · 140 · 1 · 0 · 14 Mar 2025

PluralLLM: Pluralistic Alignment in LLMs via Federated Learning
Mahmoud Srewa, Tianyu Zhao, Salma Elmalaki · FedML · 95 · 1 · 0 · 13 Mar 2025

Learning to Inference Adaptively for Multimodal Large Language Models
Zhuoyan Xu, Khoi Duc Nguyen, Preeti Mukherjee, Saurabh Bagchi, Somali Chaterji, Yingyu Liang, Yin Li · LRM · 133 · 2 · 0 · 13 Mar 2025

DarkBench: Benchmarking Dark Patterns in Large Language Models
Esben Kran, Hieu Minh "Jord" Nguyen, Akash Kundu, Sami Jawhar, Jinsuk Park, Mateusz Maria Jurewicz · 105 · 3 · 0 · 13 Mar 2025

Through the Magnifying Glass: Adaptive Perception Magnification for Hallucination-Free VLM Decoding
Shunqi Mao, Chaoyi Zhang, Weidong Cai · MLLM · 469 · 1 · 0 · 13 Mar 2025
OASST-ETC Dataset: Alignment Signals from Eye-tracking Analysis of LLM Responses
Angela Lopez-Cardona, Sebastian Idesis, Miguel Barreda-Ángeles, Sergi Abadal, Ioannis Arapakis · 144 · 0 · 0 · 13 Mar 2025

OR-LLM-Agent: Automating Modeling and Solving of Operations Research Optimization Problem with Reasoning Large Language Model
Bowen Zhang, Pengcheng Luo · LRM, AI4CE, LLMAG · 133 · 2 · 0 · 13 Mar 2025

Fine-Tuning Diffusion Generative Models via Rich Preference Optimization
Hanyang Zhao, Haoxian Chen, Yucheng Guo, Genta Indra Winata, Tingting Ou, Ziyu Huang, D. Yao, Wenpin Tang · 141 · 0 · 0 · 13 Mar 2025

Finetuning Generative Trajectory Model with Reinforcement Learning from Human Feedback
Derun Li, Jianwei Ren, Y. Wang, Xin Wen, Pengxiang Li, ..., Zhongpu Xia, Peng Jia, Xianpeng Lang, Ningyi Xu, Hang Zhao · 121 · 7 · 0 · 13 Mar 2025

RankPO: Preference Optimization for Job-Talent Matching
Yize Zhang, Ming Wang, Yu Wang, Xiaohui Wang · 117 · 0 · 0 · 13 Mar 2025

New Trends for Modern Machine Translation with Large Reasoning Models
Sinuo Liu, Chenyang Lyu, Mingyang Wu, Longyue Wang, Weihua Luo, Kaifu Zhang, Zifu Shang · LRM · 149 · 7 · 0 · 13 Mar 2025

Efficient Safety Alignment of Large Language Models via Preference Re-ranking and Representation-based Reward Modeling
Qiyuan Deng, X. Bai, Kehai Chen, Yaowei Wang, Liqiang Nie, Min Zhang · OffRL · 123 · 0 · 0 · 13 Mar 2025

Ensemble Learning for Large Language Models in Text and Code Generation: A Survey
Mari Ashiga, Wei Jie, Fan Wu, Vardan K. Voskanyan, Fateme Dinmohammadi, P. Brookes, Jingzhi Gong, Zheng Wang · 102 · 0 · 0 · 13 Mar 2025
Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
Bowen Jin, Hansi Zeng, Zhenrui Yue, Dong Wang, Sercan O. Arik, Dong Wang, Hamed Zamani, Jiawei Han · RALM, ReLM, KELM, OffRL, AI4TS, LRM · 236 · 122 · 0 · 12 Mar 2025

Teaching LLMs How to Learn with Contextual Fine-Tuning
Younwoo Choi, Muhammad Adil Asif, Ziwen Han, John Willes, Rahul G. Krishnan · LRM · 99 · 2 · 0 · 12 Mar 2025

Theoretical Guarantees for High Order Trajectory Refinement in Generative Flows
Chengyue Gong, Xiaoyu Li, Yingyu Liang, Jiangxuan Long, Zhenmei Shi, Zhao Song, Yu Tian · 140 · 3 · 0 · 12 Mar 2025

Measure Twice, Cut Once: Grasping Video Structures and Event Semantics with LLMs for Video Temporal Localization
Zongshang Pang, Mayu Otani, Yuta Nakashima · 130 · 0 · 0 · 12 Mar 2025

HaploVL: A Single-Transformer Baseline for Multi-Modal Understanding
Rui Yang, Lin Song, Yicheng Xiao, Runhui Huang, Yixiao Ge, Ying Shan, Hengshuang Zhao · MLLM · 111 · 3 · 0 · 12 Mar 2025

Prompt Inversion Attack against Collaborative Inference of Large Language Models
Wenjie Qu, Yuguang Zhou, Yongji Wu, Tingsong Xiao, Binhang Yuan, Yongbin Li, Jiaheng Zhang · 135 · 0 · 0 · 12 Mar 2025

Local Look-Ahead Guidance via Verifier-in-the-Loop for Automated Theorem Proving
Sara Rajaee, Kumar Pratik, Gabriele Cesa, Arash Behboodi · OffRL, LRM · 125 · 0 · 0 · 12 Mar 2025
Got Compute, but No Data: Lessons From Post-training a Finnish LLM
Elaine Zosa, Ville Komulainen, S. Pyysalo · 101 · 1 · 0 · 12 Mar 2025

BIMBA: Selective-Scan Compression for Long-Range Video Question Answering
Md. Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang, Gedas Bertasius, Lorenzo Torresani · 524 · 2 · 0 · 12 Mar 2025

ReMA: Learning to Meta-think for LLMs with Multi-Agent Reinforcement Learning
Bo Liu, Yunxiang Li, Yangqiu Song, Hanjing Wang, Linyi Yang, ..., Jun Wang, Jun Wang, Weinan Zhang, Shuyue Hu, Ying Wen · LLMAG, KELM, LRM, AI4CE · 134 · 11 · 0 · 12 Mar 2025

Rethinking Prompt-based Debiasing in Large Language Models
Xinyi Yang, Runzhe Zhan, Derek F. Wong, Shu Yang, Junchao Wu, Lidia S. Chao · ALM · 181 · 1 · 0 · 12 Mar 2025

Backtracking for Safety
Bilgehan Sel, Dingcheng Li, Phillip Wallis, Vaishakh Keshava, Ming Jin, Siddhartha Reddy Jonnalagadda · KELM · 96 · 0 · 0 · 11 Mar 2025

SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories
Muzhi Zhu, Yuzhuo Tian, Hao Chen, Chunluan Zhou, Qingpei Guo, Yongxu Liu, M. Yang, Chunhua Shen · MLLM, VLM · 124 · 1 · 0 · 11 Mar 2025