Training language models to follow instructions with human feedback
4 March 2022
Long Ouyang
Jeff Wu
Xu Jiang
Diogo Almeida
Carroll L. Wainwright
Pamela Mishkin
Chong Zhang
Sandhini Agarwal
Katarina Slama
Alex Ray
John Schulman
Jacob Hilton
Fraser Kelton
Luke E. Miller
Maddie Simens
Amanda Askell
Peter Welinder
Paul Christiano
Jan Leike
Ryan J. Lowe
OSLM, ALM

Papers citing "Training language models to follow instructions with human feedback"

50 / 6,395 papers shown
Learning to Explore and Select for Coverage-Conditioned Retrieval-Augmented Generation
Takyoung Kim
Kyungjae Lee
Y. Jang
Ji Yong Cho
Gangwoo Kim
Minseok Cho
Moontae Lee
290
1
0
28 Jan 2025
Using Large Language Models for education managements in Vietnamese with low resources
Duc Do Minh
Vinh Nguyen Van
Thang Dam Cong
102
1
0
28 Jan 2025
LiPO: Listwise Preference Optimization through Learning-to-Rank
Tianqi Liu
Zhen Qin
Junru Wu
Jiaming Shen
Misha Khalman
...
Mohammad Saleh
Simon Baumgartner
Jialu Liu
Peter J. Liu
Xuanhui Wang
336
60
0
28 Jan 2025
Feasible Learning
Juan Ramirez
Ignacio Hounie
Juan Elenter
Jose Gallego-Posada
Meraj Hashemizadeh
Alejandro Ribeiro
Simon Lacoste-Julien
90
2
0
28 Jan 2025
Smoothed Embeddings for Robust Language Models
Ryo Hase
Md Rafi Ur Rashid
Ashley Lewis
Jing Liu
T. Koike-Akino
K. Parsons
Yanjie Wang
AAML
123
2
0
27 Jan 2025
Complete Chess Games Enable LLM Become A Chess Master
Yinqi Zhang
Xintian Han
Haolong Li
Kedi Chen
Shaohui Lin
ReLM, ELM
124
0
0
26 Jan 2025
Coordinating Ride-Pooling with Public Transit using Reward-Guided Conservative Q-Learning: An Offline Training and Online Fine-Tuning Reinforcement Learning Framework
Yulong Hu
Tingting Dong
Sen Li
OffRL, OnRL
121
1
0
24 Jan 2025
Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains
Xu Chu
Zhijie Tan
Hanlin Xue
Guanyu Wang
Tong Mo
Weiping Li
LRM, ELM
123
3
0
24 Jan 2025
Improving Video Generation with Human Feedback
Jie Liu
Gongye Liu
Jiajun Liang
Ziyang Yuan
Xiaokun Liu
...
Pengfei Wan
Di Zhang
Kun Gai
Yujiu Yang
Wanli Ouyang
VGen, EGVM
177
26
0
23 Jan 2025
ReasVQA: Advancing VideoQA with Imperfect Reasoning Process
Jianxin Liang
Xiaojun Meng
Huishuai Zhang
Yijiao Wang
Jiansheng Wei
Dongyan Zhao
LRM
77
2
0
23 Jan 2025
AgentRec: Agent Recommendation Using Sentence Embeddings Aligned to Human Feedback
Joshua Park
Yongfeng Zhang
LLMAG, LM&Ro
168
2
0
23 Jan 2025
RAMQA: A Unified Framework for Retrieval-Augmented Multi-Modal Question Answering
Yang Bai
Christan Earl Grant
Daisy Zhe Wang
RALM
128
1
0
23 Jan 2025
HumorReject: Decoupling LLM Safety from Refusal Prefix via A Little Humor
Zihui Wu
Haichang Gao
Jiacheng Luo
Zhaoxiang Liu
155
0
0
23 Jan 2025
Test-Time Preference Optimization: On-the-Fly Alignment via Iterative Textual Feedback
Yafu Li
Xuyang Hu
Xiaoye Qu
Linjie Li
Yu Cheng
129
8
0
22 Jan 2025
WisdomBot: Tuning Large Language Models with Artificial Intelligence Knowledge
Jingyuan Chen
Tao Wu
Wei Ji
Leilei Gan
88
0
0
22 Jan 2025
Adaptive Data Exploitation in Deep Reinforcement Learning
Mingqi Yuan
Bo Li
Xin Jin
Wenjun Zeng
OffRL
459
0
0
22 Jan 2025
Understanding the LLM-ification of CHI: Unpacking the Impact of LLMs at CHI through a Systematic Literature Review
Rock Yuren Pang
Hope Schroeder
Kynnedy Simone Smith
Solon Barocas
Ziang Xiao
Emily Tseng
Danielle Bragg
185
5
0
22 Jan 2025
NExtLong: Toward Effective Long-Context Training without Long Documents
Chaochen Gao
Xing Wu
Zijia Lin
Debing Zhang
Songlin Hu
SyDa
197
2
0
22 Jan 2025
Kimi k1.5: Scaling Reinforcement Learning with LLMs
Kimi Team
Angang Du
Bofei Gao
Bowei Xing
Changjiu Jiang
...
Zihao Huang
Ziyao Xu
Zhiyong Yang
Zonghan Yang
Zongyu Lin
OffRL, ALM, AI4TS, VLM, LRM
363
338
0
22 Jan 2025
Improving Influence-based Instruction Tuning Data Selection for Balanced Learning of Diverse Capabilities
Qirun Dai
Dylan Zhang
Jiaqi W. Ma
Hao Peng
TDI
107
1
0
21 Jan 2025
From Drafts to Answers: Unlocking LLM Potential via Aggregation Fine-Tuning
Yafu Li
Zhilin Wang
Tingchen Fu
Ganqu Cui
Sen Yang
Yu Cheng
111
4
0
21 Jan 2025
A Survey on Memory-Efficient Large-Scale Model Training in AI for Science
Kaiyuan Tian
Linbo Qiao
Baihui Liu
Gongqingjian Jiang
Dongsheng Li
115
0
0
21 Jan 2025
InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model
Yuhang Zang
Xiaoyi Dong
Pan Zhang
Yuhang Cao
Ziyu Liu
...
Haodong Duan
Wentao Zhang
Kai Chen
Dahua Lin
Jiaqi Wang
VLM
264
25
0
21 Jan 2025
DiffDoctor: Diagnosing Image Diffusion Models Before Treating
Yiyang Wang
Xi Chen
Xiaogang Xu
S. Ji
Yongxu Liu
Yujun Shen
Hengshuang Zhao
DiffM
157
0
0
21 Jan 2025
ImageRef-VL: Enabling Contextual Image Referencing in Vision-Language Models
Jingwei Yi
Junhao Yin
Ju Xu
Peng Bao
Yansen Wang
Wei Fan
Haoran Wang
159
0
0
20 Jan 2025
T1: Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling
Zhenyu Hou
Xin Lv
Rui Lu
Jing Zhang
Yongqian Li
Zijun Yao
Juanzi Li
J. Tang
Yuxiao Dong
OffRL, LRM, ReLM
155
33
0
20 Jan 2025
Dialogue Benchmark Generation from Knowledge Graphs with Cost-Effective Retrieval-Augmented LLMs
Reham Omar
Omij Mangukiya
Essam Mansour
91
1
0
20 Jan 2025
Exploring Iterative Enhancement for Improving Learnersourced Multiple-Choice Question Explanations with Large Language Models
Qiming Bao
Juho Leinonen
A. Peng
Wanjun Zhong
Gaël Gendron
Tim Pistotti
Alice Huang
Paul Denny
Michael Witbrock
Jing Liu
AI4Ed, LRM
318
1
0
20 Jan 2025
Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt Templates
Kaifeng Lyu
Haoyu Zhao
Xinran Gu
Dingli Yu
Anirudh Goyal
Sanjeev Arora
ALM
133
59
0
20 Jan 2025
Agent Hospital: A Simulacrum of Hospital with Evolvable Medical Agents
Junkai Li
Yunghwei Lai
Weitao Li
Jingyi Ren
Meng Zhang
...
Siyu Wang
Ziwei Sun
Yanzhe Zhang
Weizhi Ma
Yang Liu
LLMAG, LM&MA, LM&Ro, MedIm
173
122
0
20 Jan 2025
QualityFlow: An Agentic Workflow for Program Synthesis Controlled by LLM Quality Checks
Yaojie Hu
Qiang Zhou
Qihong Chen
Xiaopeng Li
Linbo Liu
Dejiao Zhang
Amit Kachroo
Talha Oz
Omer Tripp
181
7
0
20 Jan 2025
Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators
Yinhong Liu
Han Zhou
Zhijiang Guo
Ehsan Shareghi
Ivan Vulić
Anna Korhonen
Nigel Collier
ALM
228
83
0
20 Jan 2025
RELIEF: Reinforcement Learning Empowered Graph Feature Prompt Tuning
Jiapeng Zhu
Zichen Ding
Jianxiang Yu
Jiaqi Tan
Xiang Li
Weining Qian
OffRL
236
4
0
20 Jan 2025
RLPF: Reinforcement Learning from Prediction Feedback for User Summarization with LLMs
Jiaxing Wu
Lin Ning
Luyang Liu
Harrison Lee
Neo Wu
Chao Wang
Sushant Prakash
S. O’Banion
Bradley Green
Jun Xie
201
1
0
20 Jan 2025
BoK: Introducing Bag-of-Keywords Loss for Interpretable Dialogue Response Generation
Suvodip Dey
M. Desarkar
OffRL
97
0
0
20 Jan 2025
Enhancing Semantic Consistency of Large Language Models through Model Editing: An Interpretability-Oriented Approach
J. Yang
Dapeng Chen
Yajing Sun
Rongjun Li
Zhiyong Feng
Wei Peng
131
8
0
19 Jan 2025
Generative Retrieval for Book search
Yubao Tang
Ruqing Zhang
Jiafeng Guo
Maarten de Rijke
Shihao Liu
Shuaiqiang Wang
Dawei Yin
Xueqi Cheng
RALM
158
0
0
19 Jan 2025
MedFILIP: Medical Fine-grained Language-Image Pre-training
Xinjie Liang
Xiangyu Li
Fanding Li
Jie Jiang
Qing Dong
Wei Wang
Kaidi Wang
Suyu Dong
Gongning Luo
Shuo Li
LM&MA, VLM, MedIm
177
4
0
18 Jan 2025
DriveLM: Driving with Graph Visual Question Answering
Chonghao Sima
Katrin Renz
Kashyap Chitta
Lawrence Yunliang Chen
Hanxue Zhang
Chengen Xie
Jens Beißwenger
Ping Luo
Andreas Geiger
Hongyang Li
305
208
0
17 Jan 2025
Playing Devil's Advocate: Unmasking Toxicity and Vulnerabilities in Large Vision-Language Models
Abdulkadir Erol
Trilok Padhi
Agnik Saha
Ugur Kursuncu
Mehmet Emin Aktas
106
2
0
17 Jan 2025
Clone-Robust AI Alignment
Ariel D. Procaccia
Benjamin G. Schiffer
Shirley Zhang
50
3
0
17 Jan 2025
PeFoMed: Parameter Efficient Fine-tuning of Multimodal Large Language Models for Medical Imaging
Gang Liu
Jinlong He
Pengfei Li
Genrong He
Zixu Zhao
Shenjun Zhong
LM&MA
167
3
0
17 Jan 2025
Autonomous Algorithm for Training Autonomous Vehicles with Minimal Human Intervention
Sang-Hyun Lee
Daehyeok Kwon
Seung-Woo Seo
144
1
0
17 Jan 2025
Contrastive Policy Gradient: Aligning LLMs on sequence-level scores in a supervised-friendly fashion
Yannis Flet-Berliac
Nathan Grinsztajn
Florian Strub
Bill Wu
Eugene Choi
...
Arash Ahmadian
Yash Chandak
M. G. Azar
Olivier Pietquin
Matthieu Geist
OffRL
167
10
0
17 Jan 2025
A Simple Graph Contrastive Learning Framework for Short Text Classification
Yuqi Liu
Fausto Giunchiglia
Lan Huang
Ximing Li
Xiaoyue Feng
Renchu Guan
126
0
0
17 Jan 2025
A Comprehensive Survey of Foundation Models in Medicine
Wasif Khan
Seowung Leem
Kyle B. See
Joshua K. Wong
Shaoting Zhang
R. Fang
AI4CE, LM&MA, VLM
302
27
0
17 Jan 2025
Provably Efficient Reinforcement Learning with Multinomial Logit Function Approximation
Long-Fei Li
Yu Zhang
Peng Zhao
Zhi Zhou
259
5
0
17 Jan 2025
A Survey of Research in Large Language Models for Electronic Design Automation
Jingyu Pan
Guanglei Zhou
Chen-Chia Chang
Isaac Jacobson
Jiang Hu
Yuxiao Chen
135
5
0
17 Jan 2025
Can ChatGPT Overcome Behavioral Biases in the Financial Sector? Classify-and-Rethink: Multi-Step Zero-Shot Reasoning in the Gold Investment
Shuoling Liu
Gaoguo Jia
Yuhang Jiang
Liyuan Chen
Qiang Yang
AIFin, LRM
218
0
0
17 Jan 2025
Revisiting Rogers' Paradox in the Context of Human-AI Interaction
Katherine M. Collins
Umang Bhatt
Ilia Sucholutsky
165
1
0
16 Jan 2025