ResearchTrend.AI
arXiv:2310.06147 · Cited By

Reinforcement Learning in the Era of LLMs: What is Essential? What is needed? An RL Perspective on RLHF, Prompting, and Beyond

Hao Sun · 9 October 2023 · OffRL

Papers citing "Reinforcement Learning in the Era of LLMs: What is Essential? What is needed? An RL Perspective on RLHF, Prompting, and Beyond"

28 / 28 papers shown
  • Reinforcement Learning for Generative AI: A Survey (28 Aug 2023). Yuanjiang Cao, Quan.Z Sheng, Julian McAuley, Lina Yao. [SyDa]
  • Direct Preference Optimization: Your Language Model is Secretly a Reward Model (29 May 2023). Rafael Rafailov, Archit Sharma, E. Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn. [ALM]
  • RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment (13 Apr 2023). Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Boyao Wang, Shizhe Diao, Jipeng Zhang, Kashun Shum, Tong Zhang. [ALM]
  • RRHF: Rank Responses to Align Language Models with Human Feedback without tears (11 Apr 2023). Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, Feiran Huang. [ALM]
  • The Wisdom of Hindsight Makes Language Models Better Instruction Followers (10 Feb 2023). Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, Joseph E. Gonzalez.
  • Training language models to follow instructions with human feedback (04 Mar 2022). Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe. [OSLM, ALM]
  • Rethinking Goal-conditioned Supervised Learning and Its Connection to Offline RL (09 Feb 2022). Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, Chongjie Zhang. [OffRL]
  • Recurrent Model-Free RL Can Be a Strong Baseline for Many POMDPs (11 Oct 2021). Tianwei Ni, Benjamin Eysenbach, Ruslan Salakhutdinov.
  • Safe Exploration by Solving Early Terminated MDP (09 Jul 2021). Hao Sun, Ziping Xu, Meng Fang, Zhenghao Peng, Jiadong Guo, Bo Dai, Bolei Zhou.
  • Strictly Batch Imitation Learning by Energy-based Distribution Matching (25 Jun 2020). Daniel Jarrett, Ioana Bica, M. Schaar. [OffRL]
  • Zeroth-Order Supervised Policy Improvement (11 Jun 2020). Hao Sun, Ziping Xu, Yuhang Song, Meng Fang, Jiechao Xiong, Bo Dai, Bolei Zhou. [OffRL]
  • Conservative Q-Learning for Offline Reinforcement Learning (08 Jun 2020). Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine. [OffRL, OnRL]
  • Novel Policy Seeking with Constrained Optimization (21 May 2020). Hao Sun, Zhenghao Peng, Bo Dai, Jian Guo, Dahua Lin, Bolei Zhou.
  • Policy Continuation with Hindsight Inverse Dynamics (30 Oct 2019). Hao Sun, Zhizhong Li, Xiaotong Liu, Dahua Lin, Bolei Zhou.
  • When to Trust Your Model: Model-Based Policy Optimization (19 Jun 2019). Michael Janner, Justin Fu, Marvin Zhang, Sergey Levine. [OffRL]
  • Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations (12 Apr 2019). Daniel S. Brown, Wonjoon Goo, P. Nagarajan, S. Niekum.
  • Soft Actor-Critic Algorithms and Applications (13 Dec 2018). Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, ..., Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, Sergey Levine.
  • Addressing Function Approximation Error in Actor-Critic Methods (26 Feb 2018). Scott Fujimoto, H. V. Hoof, David Meger. [OffRL]
  • Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research (26 Feb 2018). Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, ..., Joshua Tobin, Maciek Chociej, Peter Welinder, Vikash Kumar, Wojciech Zaremba.
  • Overcoming Exploration in Reinforcement Learning with Demonstrations (28 Sep 2017). Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter Abbeel. [OffRL]
  • Proximal Policy Optimization Algorithms (20 Jul 2017). John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov. [OffRL]
  • Hindsight Experience Replay (05 Jul 2017). Marcin Andrychowicz, Dwight Crow, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Joshua Tobin, Pieter Abbeel, Wojciech Zaremba. [OffRL]
  • Equivalence Between Policy Gradients and Soft Q-Learning (21 Apr 2017). John Schulman, Xi Chen, Pieter Abbeel. [OffRL]
  • Reinforcement Learning with Deep Energy-Based Policies (27 Feb 2017). Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine.
  • Generative Adversarial Imitation Learning (10 Jun 2016). Jonathan Ho, Stefano Ermon. [GAN]
  • Trust Region Policy Optimization (19 Feb 2015). John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel.
  • Playing Atari with Deep Reinforcement Learning (19 Dec 2013). Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller.
  • A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning (02 Nov 2010). Stéphane Ross, Geoffrey J. Gordon, J. Andrew Bagnell. [OffRL]