Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog (arXiv:1907.00456)

30 June 2019
Natasha Jaques
Asma Ghandeharioun
J. Shen
Craig Ferguson
Àgata Lapedriza
Noah J. Jones
S. Gu
Rosalind W. Picard
    OffRL
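For context on why so many of the citing papers below carry the OffRL tag: the paper above learns dialog policies from fixed batches of offline human interaction data while penalizing divergence from a pretrained prior language model. A minimal sketch of that KL-regularized objective, with notation assumed here rather than taken from this page:

\[
\max_{\pi}\ \mathbb{E}_{\pi}\Big[\sum_{t} r(s_t, a_t)\Big]
\;-\; \lambda\, \mathbb{E}_{s}\Big[ D_{\mathrm{KL}}\big(\pi(\cdot \mid s)\,\|\, p_{\mathrm{prior}}(\cdot \mid s)\big) \Big]
\]

where \(p_{\mathrm{prior}}\) is a language model pretrained on human dialog and \(\lambda\) controls how far the learned policy may drift from it.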

Papers citing "Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog"

Showing 50 of 99 citing papers.
A Survey on Progress in LLM Alignment from the Perspective of Reward Design
Miaomiao Ji
Yanqiu Wu
Zhibin Wu
Shoujin Wang
Jian Yang
Mark Dras
Usman Naseem
41
0
0
05 May 2025
Adversarial Training of Reward Models
Alexander Bukharin
Haifeng Qian
Shengyang Sun
Adithya Renduchintala
Soumye Singhal
Zihan Wang
Oleksii Kuchaiev
Olivier Delalleau
T. Zhao
AAML
32
0
0
08 Apr 2025
Mitigating Preference Hacking in Policy Optimization with Pessimism
Dhawal Gupta
Adam Fisch
Christoph Dann
Alekh Agarwal
76
0
0
10 Mar 2025
Can RLHF be More Efficient with Imperfect Reward Models? A Policy Coverage Perspective
Jiawei Huang
Bingcong Li
Christoph Dann
Niao He
OffRL
85
0
0
26 Feb 2025
Beyond Reward Hacking: Causal Rewards for Large Language Model Alignment
Chaoqi Wang
Zhuokai Zhao
Yibo Jiang
Zhaorun Chen
Chen Zhu
...
Jiayi Liu
Lizhu Zhang
Xiangjun Fan
Hao Ma
Sinong Wang
80
3
0
17 Jan 2025
OMG-RL: Offline Model-based Guided Reward Learning for Heparin Treatment
Yooseok Lim
Sujee Lee
OffRL
150
0
0
03 Jan 2025
ACL-QL: Adaptive Conservative Level in Q-Learning for Offline Reinforcement Learning
Kun Wu
Yinuo Zhao
Zhihao Xu
Zhengping Che
Chengxiang Yin
C. Liu
Qinru Qiu
Feifei Feng
OffRL
102
1
0
22 Dec 2024
Q-Distribution guided Q-learning for offline reinforcement learning: Uncertainty penalized Q-value via consistency model
Jing Zhang
Linjiajie Fang
Kexin Shi
Wenjia Wang
Bing-Yi Jing
OffRL
36
0
0
27 Oct 2024
RL, but don't do anything I wouldn't do
Michael K. Cohen
Marcus Hutter
Yoshua Bengio
Stuart J. Russell
OffRL
35
2
0
08 Oct 2024
Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification
Thomas Kwa
Drake Thomas
Adrià Garriga-Alonso
41
1
0
19 Jul 2024
Discovering Preference Optimization Algorithms with and for Large Language Models
Chris Xiaoxuan Lu
Samuel Holt
Claudio Fanconi
Alex J. Chan
Jakob Foerster
M. Schaar
R. T. Lange
OffRL
40
16
0
12 Jun 2024
Multi-objective Reinforcement learning from AI Feedback
Marcus Williams
46
1
0
11 Jun 2024
Teaching Language Models to Self-Improve by Learning from Language Feedback
Chi Hu
Yimin Hu
Hang Cao
Tong Xiao
Jingbo Zhu
LRM
VLM
35
4
0
11 Jun 2024
Offline Regularised Reinforcement Learning for Large Language Models Alignment
Pierre Harvey Richemond
Yunhao Tang
Daniel Guo
Daniele Calandriello
M. G. Azar
...
Gil Shamir
Rishabh Joshi
Tianqi Liu
Rémi Munos
Bilal Piot
OffRL
46
23
0
29 May 2024
Performance-Aligned LLMs for Generating Fast Code
Daniel Nichols
Pranav Polasam
Harshitha Menon
Aniruddha Marathe
T. Gamblin
A. Bhatele
35
8
0
29 Apr 2024
HyperCLOVA X Technical Report
Kang Min Yoo
Jaegeun Han
Sookyo In
Heewon Jeon
Jisu Jeong
...
Hyunkyung Noh
Se-Eun Choi
Sang-Woo Lee
Jung Hwa Lim
Nako Sung
VLM
37
8
0
02 Apr 2024
Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning
Lanqing Li
Hai Zhang
Xinyu Zhang
Shatong Zhu
Junqiao Zhao
Pheng-Ann Heng
OffRL
43
7
0
04 Feb 2024
HiBid: A Cross-Channel Constrained Bidding System with Budget Allocation by Hierarchical Offline Deep Reinforcement Learning
Hao Wang
Bo Tang
Chi Harold Liu
Shangqin Mao
Jiahong Zhou
Zipeng Dai
Yaqi Sun
Qianlong Xie
Xingxing Wang
Dong Wang
OffRL
41
3
0
29 Dec 2023
Mitigating Open-Vocabulary Caption Hallucinations
Assaf Ben-Kish
Moran Yanuka
Morris Alper
Raja Giryes
Hadar Averbuch-Elor
MLLM
VLM
26
6
0
06 Dec 2023
Quality Diversity through Human Feedback: Towards Open-Ended Diversity-Driven Optimization
Lijie Ding
Jenny Zhang
Jeff Clune
Lee Spector
Joel Lehman
EGVM
37
7
0
18 Oct 2023
OpenChat: Advancing Open-source Language Models with Mixed-Quality Data
Guan-Bo Wang
Sijie Cheng
Xianyuan Zhan
Xiangang Li
Sen Song
Yang Liu
ALM
27
231
0
20 Sep 2023
Reinforcement Learning for Generative AI: A Survey
Yuanjiang Cao
Quan Z. Sheng
Julian McAuley
Lina Yao
SyDa
50
10
0
28 Aug 2023
Secrets of RLHF in Large Language Models Part I: PPO
Rui Zheng
Shihan Dou
Songyang Gao
Yuan Hua
Wei Shen
...
Hang Yan
Tao Gui
Qi Zhang
Xipeng Qiu
Xuanjing Huang
ALM
OffRL
55
159
0
11 Jul 2023
Preference-grounded Token-level Guidance for Language Model Fine-tuning
Shentao Yang
Shujian Zhang
Congying Xia
Yihao Feng
Caiming Xiong
Mi Zhou
29
23
0
01 Jun 2023
Multimodal Web Navigation with Instruction-Finetuned Foundation Models
Hiroki Furuta
Kuang-Huei Lee
Ofir Nachum
Yutaka Matsuo
Aleksandra Faust
S. Gu
Izzeddin Gur
LM&Ro
36
93
0
19 May 2023
Deep RL with Hierarchical Action Exploration for Dialogue Generation
Itsugun Cho
Ryota Takahashi
Yusaku Yanase
Hiroaki Saito
28
2
0
22 Mar 2023
Adaptive Policy Learning for Offline-to-Online Reinforcement Learning
Han Zheng
Xufang Luo
Pengfei Wei
Xuan Song
Dongsheng Li
Jing Jiang
OffRL
OnRL
18
21
0
14 Mar 2023
Behavior Proximal Policy Optimization
Zifeng Zhuang
Kun Lei
Jinxin Liu
Donglin Wang
Yilang Guo
OffRL
30
34
0
22 Feb 2023
Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning
Hanlin Zhu
Paria Rashidinejad
Jiantao Jiao
OffRL
38
15
0
30 Jan 2023
Constrained Policy Optimization with Explicit Behavior Density for Offline Reinforcement Learning
Jing Zhang
Chi Zhang
Wenjia Wang
Bing-Yi Jing
OffRL
35
7
0
28 Jan 2023
Human-in-the-loop Abstractive Dialogue Summarization
Jiaao Chen
Mohan Dodda
Diyi Yang
28
10
0
19 Dec 2022
KRLS: Improving End-to-End Response Generation in Task Oriented Dialog with Reinforced Keywords Learning
Xiao Yu
Qingyang Wu
Kun Qian
Zhou Yu
OffRL
21
11
0
30 Nov 2022
Causal Deep Reinforcement Learning Using Observational Data
Wenxuan Zhu
Chao Yu
Qiaosheng Zhang
CML
OffRL
26
5
0
28 Nov 2022
SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration
Giulia Vezzani
Dhruva Tirumala
Markus Wulfmeier
Dushyant Rao
A. Abdolmaleki
...
Tim Hertweck
Thomas Lampe
Fereshteh Sadeghi
N. Heess
Martin Riedmiller
OffRL
41
6
0
24 Nov 2022
Model-based Trajectory Stitching for Improved Offline Reinforcement Learning
Charles A. Hepburn
Giovanni Montana
OffRL
32
13
0
21 Nov 2022
Reward Gaming in Conditional Text Generation
Richard Yuanzhe Pang
Vishakh Padmakumar
Thibault Sellam
Ankur P. Parikh
He He
35
24
0
16 Nov 2022
Offline Reinforcement Learning with Adaptive Behavior Regularization
Yunfan Zhou
Xijun Li
Qingyu Qu
OffRL
27
1
0
15 Nov 2022
The CRINGE Loss: Learning what language not to model
Leonard Adolphs
Tianyu Gao
Jing Xu
Kurt Shuster
Sainbayar Sukhbaatar
Jason Weston
MU
28
35
0
10 Nov 2022
Wall Street Tree Search: Risk-Aware Planning for Offline Reinforcement Learning
D. Elbaz
Gal Novik
Oren Salzman
OffRL
33
0
0
06 Nov 2022
Dual Generator Offline Reinforcement Learning
Q. Vuong
Aviral Kumar
Sergey Levine
Yevgen Chebotar
OffRL
34
1
0
02 Nov 2022
Reinforcement Learning and Bandits for Speech and Language Processing: Tutorial, Review and Outlook
Baihan Lin
OffRL
AI4TS
32
27
0
24 Oct 2022
The Pump Scheduling Problem: A Real-World Scenario for Reinforcement Learning
Henrique Donancio
L. Vercouter
H. Roclawski
AI4CE
18
1
0
20 Oct 2022
Robust Offline Reinforcement Learning with Gradient Penalty and Constraint Relaxation
Chengqian Gao
Kelvin Xu
Liu Liu
Deheng Ye
P. Zhao
Zhiqiang Xu
OffRL
45
2
0
19 Oct 2022
Boosting Offline Reinforcement Learning via Data Rebalancing
Yang Yue
Bingyi Kang
Xiao Ma
Zhongwen Xu
Gao Huang
Shuicheng Yan
OffRL
26
22
0
17 Oct 2022
S2P: State-conditioned Image Synthesis for Data Augmentation in Offline Reinforcement Learning
Daesol Cho
D. Shim
H. J. Kim
OffRL
42
11
0
30 Sep 2022
Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans
John J. Nay
ELM
AILaw
88
27
0
14 Sep 2022
Dialogue Evaluation with Offline Reinforcement Learning
Nurul Lubis
Christian Geishauser
Hsien-Chin Lin
Carel van Niekerk
Michael Heck
Shutong Feng
Milica Gašić
OffRL
27
4
0
02 Sep 2022
Multi-objective Optimization of Notifications Using Offline Reinforcement Learning
Prakruthi Prabhakar
Yiping Yuan
Guangyu Yang
Wensheng Sun
A. Muralidharan
OffRL
28
6
0
07 Jul 2022
Why is constrained neural language generation particularly challenging?
Cristina Garbacea
Qiaozhu Mei
59
14
0
11 Jun 2022
On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting
Tomasz Korbak
Hady ElSahar
Germán Kruszewski
Marc Dymetman
CLL
25
51
0
01 Jun 2022