ResearchTrend.AI

Training language models to follow instructions with human feedback
arXiv:2203.02155 · 4 March 2022
Long Ouyang
Jeff Wu
Xu Jiang
Diogo Almeida
Carroll L. Wainwright
Pamela Mishkin
Chong Zhang
Sandhini Agarwal
Katarina Slama
Alex Ray
John Schulman
Jacob Hilton
Fraser Kelton
Luke E. Miller
Maddie Simens
Amanda Askell
Peter Welinder
Paul Christiano
Jan Leike
Ryan J. Lowe
    OSLM
    ALM

Papers citing "Training language models to follow instructions with human feedback"

50 / 7,311 papers shown
Prompt Inversion Attack against Collaborative Inference of Large Language Models
Wenjie Qu
Yuguang Zhou
Yongji Wu
Tingsong Xiao
Binhang Yuan
Heng Chang
Jiaheng Zhang
79
0
0
12 Mar 2025
Local Look-Ahead Guidance via Verifier-in-the-Loop for Automated Theorem Proving
Sara Rajaee
Kumar Pratik
Gabriele Cesa
Arash Behboodi
OffRL
LRM
63
0
0
12 Mar 2025
Teaching LLMs How to Learn with Contextual Fine-Tuning
Younwoo Choi
Muhammad Adil Asif
Ziwen Han
John Willes
Rahul G. Krishnan
LRM
41
0
0
12 Mar 2025
ReMA: Learning to Meta-think for LLMs with Multi-Agent Reinforcement Learning
Bo Liu
Yunxiang Li
Yangqiu Song
Hanjing Wang
Linyi Yang
Mark Schmidt
Jun Wang
Weinan Zhang
Shuyue Hu
Ying Wen
LLMAG
KELM
LRM
AI4CE
94
6
0
12 Mar 2025
Measure Twice, Cut Once: Grasping Video Structures and Event Semantics with LLMs for Video Temporal Localization
Zongshang Pang
Mayu Otani
Yuta Nakashima
63
0
0
12 Mar 2025
Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
Bowen Jin
Hansi Zeng
Zhenrui Yue
Dong Wang
Sercan O. Arik
Dong Wang
Hamed Zamani
Jiawei Han
RALM
ReLM
KELM
OffRL
AI4TS
LRM
89
38
0
12 Mar 2025
BIMBA: Selective-Scan Compression for Long-Range Video Question Answering
Md. Mohaiminul Islam
Tushar Nagarajan
Huiyu Wang
Gedas Bertasius
Lorenzo Torresani
249
1
0
12 Mar 2025
Towards Large-scale Chemical Reaction Image Parsing via a Multimodal Large Language Model
Yufan Chen
Ching Ting Leung
Jianwei Sun
Yong Huang
Linyan Li
Hao Chen
Hanyu Gao
71
0
0
11 Mar 2025
LangTime: A Language-Guided Unified Model for Time Series Forecasting with Proximal Policy Optimization
Wenzhe Niu
Zongxia Xie
Yanru Sun
Wei He
Man Xu
Chao Hao
AI4TS
55
1
0
11 Mar 2025
Regulatory DNA sequence Design with Reinforcement Learning
Zhao-Qing Yang
Fuchun Sun
Chuan Cao
Ji-Rong Wen
63
0
0
11 Mar 2025
A Cascading Cooperative Multi-agent Framework for On-ramp Merging Control Integrating Large Language Models
Miao Zhang
Zhenlong Fang
Tianyi Wang
Qin Zhang
Shuai Lu
Junfeng Jiao
Tianyu Shi
AI4CE
69
4
0
11 Mar 2025
Dialogue Injection Attack: Jailbreaking LLMs through Context Manipulation
Wenlong Meng
Fan Zhang
Wendao Yao
Zhenyuan Guo
Yongqian Li
Chengkun Wei
Wenzhi Chen
AAML
40
2
0
11 Mar 2025
Group Preference Alignment: Customized LLM Response Generation from In-Situ Conversations
Ishani Mondal
Jack W. Stokes
S. Jauhar
Longqi Yang
Mengting Wan
Xiaofeng Xu
Xia Song
Jennifer Neville
55
0
0
11 Mar 2025
Near-Optimal Sample Complexity for Iterated CVaR Reinforcement Learning with a Generative Model
Zilong Deng
Simon Khan
Shaofeng Zou
61
0
0
11 Mar 2025
PromptLNet: Region-Adaptive Aesthetic Enhancement via Prompt Guidance in Low-Light Enhancement Net
Jun Yin
Yangfan He
Miao Zhang
Pengyu Zeng
Tianyi Wang
Shuai Lu
Xueqian Wang
DiffM
69
3
0
11 Mar 2025
Robust Multi-Objective Controlled Decoding of Large Language Models
Seongho Son
William Bankes
Sangwoong Yoon
Shyam Sundhar Ramesh
Xiaohang Tang
Ilija Bogunovic
46
0
0
11 Mar 2025
SegAgent: Exploring Pixel Understanding Capabilities in MLLMs by Imitating Human Annotator Trajectories
Muzhi Zhu
Yuzhuo Tian
Hao Chen
Chunluan Zhou
Qingpei Guo
Yongxu Liu
M. Yang
Chunhua Shen
MLLM
VLM
81
0
0
11 Mar 2025
Backtracking for Safety
Bilgehan Sel
Dingcheng Li
Phillip Wallis
Vaishakh Keshava
Ming Jin
Siddhartha Reddy Jonnalagadda
KELM
55
0
0
11 Mar 2025
EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees
Zhiyuan Zeng
Yizhong Wang
Hannaneh Hajishirzi
Pang Wei Koh
ELM
66
6
0
11 Mar 2025
Generating Robot Constitutions & Benchmarks for Semantic Safety
P. Sermanet
Anirudha Majumdar
A. Irpan
Dmitry Kalashnikov
Vikas Sindhwani
LM&Ro
68
1
0
11 Mar 2025
GTR: Guided Thought Reinforcement Prevents Thought Collapse in RL-based VLM Agent Training
Tong Wei
Yijun Yang
Junliang Xing
Yuanchun Shi
Zongqing Lu
Deheng Ye
OffRL
LRM
49
1
0
11 Mar 2025
Chemical reasoning in LLMs unlocks steerable synthesis planning and reaction mechanism elucidation
Andres M Bran
Theo A Neukomm
Daniel Armstrong
Zlatko Joncev
Philippe Schwaller
LRM
50
2
0
11 Mar 2025
Multimodal Generation of Animatable 3D Human Models with AvatarForge
Xinhang Liu
Yu-Wing Tai
Chi-Keung Tang
65
0
0
11 Mar 2025
Exploiting Instruction-Following Retrievers for Malicious Information Retrieval
Parishad BehnamGhader
Nicholas Meade
Siva Reddy
67
1
0
11 Mar 2025
Utilizing Jailbreak Probability to Attack and Safeguard Multimodal LLMs
Wenzhuo Xu
Zhipeng Wei
Xiongtao Sun
Deyue Zhang
Dongdong Yang
Quanchen Zou
Xinming Zhang
AAML
57
0
0
10 Mar 2025
Dynamic Path Navigation for Motion Agents with LLM Reasoning
Yubo Zhao
Qi Wu
Yifan Wang
Yu-Wing Tai
Chi-Keung Tang
LRM
LLMAG
262
0
0
10 Mar 2025
Uncovering Cross-Domain Recommendation Ability of Large Language Models
Xinyi Liu
Ruijie Wang
Dachun Sun
Dilek Hakkani-Tur
Tarek Abdelzaher
240
0
0
10 Mar 2025
Safety Guardrails for LLM-Enabled Robots
Zachary Ravichandran
Alexander Robey
Vijay Kumar
George Pappas
Hamed Hassani
66
2
0
10 Mar 2025
Unleashing the Potential of Large Language Models for Text-to-Image Generation through Autoregressive Representation Alignment
Xing Xie
Jiawei Liu
Ziyue Lin
Huijie Fan
Zhi Han
Yandong Tang
Liangqiong Qu
47
0
0
10 Mar 2025
Painting with Words: Elevating Detailed Image Captioning with Benchmark and Alignment Learning
Qinghao Ye
Xianhan Zeng
Fu Li
Chong Li
Haoqi Fan
CoGe
86
2
0
10 Mar 2025
VLRMBench: A Comprehensive and Challenging Benchmark for Vision-Language Reward Models
Jiacheng Ruan
Wenzhen Yuan
Xian Gao
Ye Guo
Daoxin Zhang
Zhe Xu
Yao Hu
Ting Liu
Yuzhuo Fu
LRM
VLM
72
4
0
10 Mar 2025
Think Before You Segment: High-Quality Reasoning Segmentation with GPT Chain of Thoughts
Shiu-hong Kao
Yu-Wing Tai
Chi-Keung Tang
LRM
MLLM
66
0
0
10 Mar 2025
UC-MOA: Utility-Conditioned Multi-Objective Alignment for Distributional Pareto-Optimality
Zelei Cheng
Xin-Qiang Cai
Yuting Tang
Pushi Zhang
Boming Yang
Masashi Sugiyama
Xinyu Xing
51
0
0
10 Mar 2025
AuthorMist: Evading AI Text Detectors with Reinforcement Learning
Isaac David
Arthur Gervais
DeLMO
57
0
0
10 Mar 2025
LMM-R1: Empowering 3B LMMs with Strong Reasoning Abilities Through Two-Stage Rule-Based RL
Yingzhe Peng
Gongrui Zhang
Miaosen Zhang
Zhiyuan You
Jie Liu
Qipeng Zhu
Kai Yang
Xingzhong Xu
Xin Geng
Xu Yang
LRM
ReLM
100
33
0
10 Mar 2025
XIFBench: Evaluating Large Language Models on Multilingual Instruction Following
Zhiyu Li
Kehai Chen
Yunfei Long
X. Bai
Yaoyin Zhang
Xuchen Wei
J. Li
Min Zhang
ELM
69
0
0
10 Mar 2025
VisRL: Intention-Driven Visual Perception via Reinforced Reasoning
Zhangquan Chen
Xufang Luo
Dongsheng Li
OffRL
LRM
75
3
0
10 Mar 2025
AlphaDrive: Unleashing the Power of VLMs in Autonomous Driving via Reinforcement Learning and Reasoning
Bo Jiang
Shaoyu Chen
Qian Zhang
Wenyu Liu
Xinggang Wang
OffRL
LRM
VLM
84
5
0
10 Mar 2025
RePO: ReLU-based Preference Optimization
Junkang Wu
Kexin Huang
Xue Wang
Jinyang Gao
Bolin Ding
Jiancan Wu
Xiangnan He
Xuben Wang
76
0
0
10 Mar 2025
Boosting the Generalization and Reasoning of Vision Language Models with Curriculum Reinforcement Learning
Huilin Deng
Ding Zou
Rui Ma
Hongchen Luo
Yang Cao
Yu Kang
LRM
VLM
62
6
0
10 Mar 2025
Graphormer-Guided Task Planning: Beyond Static Rules with LLM Safety Perception
Wanjing Huang
Tongjie Pan
Yalan Ye
52
2
0
10 Mar 2025
From Idea to Implementation: Evaluating the Influence of Large Language Models in Software Development -- An Opinion Paper
Sargam Yadav
Asifa Mehmood Qureshi
Abhishek Kaushik
Shubham Sharma
Roisin Loughran
...
Nikhil Singh
Padraic O'Hara
Pranay Jaiswal
Roshan Chandru
David Lillis
61
1
0
10 Mar 2025
Mitigating Preference Hacking in Policy Optimization with Pessimism
Dhawal Gupta
Adam Fisch
Christoph Dann
Alekh Agarwal
76
0
0
10 Mar 2025
Hierarchical Balance Packing: Towards Efficient Supervised Fine-tuning for Long-Context LLM
Yongqiang Yao
Jingru Tan
Kaihuan Liang
Feizhao Zhang
Yazhe Niu
Jiahao Hu
Ruihao Gong
Dahua Lin
Ningyi Xu
65
0
0
10 Mar 2025
Towards Large Language Models that Benefit for All: Benchmarking Group Fairness in Reward Models
Kefan Song
Jin Yao
Runnan Jiang
Rohan Chandra
Shangtong Zhang
ALM
46
0
0
10 Mar 2025
DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs
Jongwoo Ko
Tianyi Chen
Sungnyun Kim
Tianyu Ding
Luming Liang
Ilya Zharkov
Se-Young Yun
VLM
240
0
0
10 Mar 2025
Alignment for Efficient Tool Calling of Large Language Models
Hongshen Xu
Zihan Wang
Zichen Zhu
Lei Pan
Xingyu Chen
Lu Chen
Kai Yu
51
0
0
09 Mar 2025
PerturboLLaVA: Reducing Multimodal Hallucinations with Perturbative Visual Training
Cong Chen
Mingyu Liu
Chenchen Jing
Y. Zhou
Fengyun Rao
Hao Chen
Bo Zhang
Chunhua Shen
MLLM
AAML
VLM
67
5
0
09 Mar 2025
Evaluating and Aligning Human Economic Risk Preferences in LLMs
Jiaheng Liu
Yi Yang
Kar Yan Tam
73
0
0
09 Mar 2025
PFDial: A Structured Dialogue Instruction Fine-tuning Method Based on UML Flowcharts
Ming Zhang
Yansen Wang
Yujiong Shen
Tingyi Yang
Changhao Jiang
...
Tao Liang
Guojun Ma
Qi Zhang
Tao Gui
Xuanjing Huang
63
0
0
09 Mar 2025