Prompt-Tuning Decision Transformer with Preference Ranking
arXiv:2305.09648 · 16 May 2023
Shengchao Hu, Li Shen, Ya Zhang, Dacheng Tao
Tags: OffRL
Papers citing "Prompt-Tuning Decision Transformer with Preference Ranking"

47 / 47 papers shown
Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles
Zhiwei Tang, Dmitry Rybin, Tsung-Hui Chang
Tags: ALM, DiffM · 29 citations · 07 Mar 2023

Graph Decision Transformer
Shengchao Hu, Li Shen, Ya Zhang, Dacheng Tao
Tags: OffRL · 16 citations · 07 Mar 2023

A Survey on Transformers in Reinforcement Learning
Wenzhe Li, Hao Luo, Zichuan Lin, Chongjie Zhang, Zongqing Lu, Deheng Ye
Tags: OffRL, MU, AI4CE · 56 citations · 08 Jan 2023

Q-learning Decision Transformer: Leveraging Dynamic Programming for Conditional Sequence Modelling in Offline RL
Taku Yamagata, Ahmed Khalil, Raúl Santos-Rodríguez
Tags: OffRL · 76 citations · 08 Sep 2022

Prompting Decision Transformer for Few-Shot Policy Generalization
Mengdi Xu, Songlin Yang, Shun Zhang, Yuchen Lu, Ding Zhao, J. Tenenbaum, Chuang Gan
Tags: OffRL · 145 citations · 27 Jun 2022

RLPrompt: Optimizing Discrete Text Prompts with Reinforcement Learning
Mingkai Deng, Jianyu Wang, Cheng-Ping Hsieh, Yihan Wang, Han Guo, Tianmin Shu, Meng Song, Eric Xing, Zhiting Hu
340 citations · 25 May 2022

Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, ..., Jack Clark, Sam McCandlish, C. Olah, Benjamin Mann, Jared Kaplan
2,535 citations · 12 Apr 2022

GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models
Archiki Prasad, Peter Hase, Xiang Zhou, Joey Tianyi Zhou
123 citations · 14 Mar 2022

Conditional Prompt Learning for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
Tags: VLM, CLIP, VPVLM · 1,344 citations · 10 Mar 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM · 12,835 citations · 04 Mar 2022

Black-box Prompt Learning for Pre-trained Language Models
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang
Tags: VLM, AAML · 70 citations · 21 Jan 2022

Black-Box Tuning for Language-Model-as-a-Service
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu
Tags: VLM · 266 citations · 10 Jan 2022

Generalized Decision Transformer for Offline Hindsight Information Matching
Hiroki Furuta, Y. Matsuo, S. Gu
Tags: OffRL · 103 citations · 19 Nov 2021

PPT: Pre-trained Prompt Tuning for Few-shot Learning
Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang
Tags: VLM · 416 citations · 09 Sep 2021

Learning to Prompt for Vision-Language Models
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
Tags: VPVLM, CLIP, VLM · 2,394 citations · 02 Sep 2021

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
Tags: VLM, SyDa · 3,961 citations · 28 Jul 2021

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, A. Srinivas, Igor Mordatch
Tags: OffRL · 1,638 citations · 02 Jun 2021

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Tags: VPVLM · 4,021 citations · 18 Apr 2021

Factual Probing Is [MASK]: Learning vs. Learning to Recall
Zexuan Zhong, Dan Friedman, Danqi Chen
410 citations · 12 Apr 2021

GPT Understands, Too
Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang
Tags: VLM · 1,173 citations · 18 Mar 2021

Learning Transferable Visual Models From Natural Language Supervision
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
Tags: CLIP, VLM · 29,341 citations · 26 Feb 2021

COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn
Tags: OffRL · 430 citations · 16 Feb 2021

Prefix-Tuning: Optimizing Continuous Prompts for Generation
Xiang Lisa Li, Percy Liang
4,244 citations · 01 Jan 2021

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
Tags: ViT · 40,739 citations · 22 Oct 2020

A One-bit, Comparison-Based Gradient Estimator
HanQin Cai, Daniel McKenzie, W. Yin, Zhenliang Zhang
17 citations · 06 Oct 2020

Mastering Atari with Discrete World Models
Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, Jimmy Ba
Tags: DRL · 852 citations · 05 Oct 2020

Offline Meta-Reinforcement Learning with Advantage Weighting
E. Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, Chelsea Finn
Tags: OffRL · 105 citations · 13 Aug 2020

Conservative Q-Learning for Offline Reinforcement Learning
Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine
Tags: OffRL, OnRL · 1,809 citations · 08 Jun 2020

Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
Tags: BDL · 41,736 citations · 28 May 2020

MOPO: Model-based Offline Policy Optimization
Tianhe Yu, G. Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, Tengyu Ma
Tags: OffRL · 764 citations · 27 May 2020

MOReL: Model-Based Offline Reinforcement Learning
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims
Tags: OffRL · 669 citations · 12 May 2020

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
Tags: OffRL, GP · 2,018 citations · 04 May 2020

D4RL: Datasets for Deep Data-Driven Reinforcement Learning
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, Sergey Levine
Tags: GP, OffRL · 1,364 citations · 15 Apr 2020

Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, A. Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin Riedmiller
Tags: OffRL · 283 citations · 19 Feb 2020

A Simple Framework for Contrastive Learning of Visual Representations
Ting-Li Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E. Hinton
Tags: SSL · 18,721 citations · 13 Feb 2020

Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Avnish Narayan, Hayden Shively, Adithya Bellathur, Karol Hausman, Chelsea Finn, Sergey Levine
Tags: OffRL · 1,165 citations · 24 Oct 2019

VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning
L. Zintgraf, K. Shiarlis, Maximilian Igl, Sebastian Schulze, Y. Gal, Katja Hofmann, Shimon Whiteson
Tags: OffRL · 277 citations · 18 Oct 2019

Meta-Learning with Implicit Gradients
Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine
854 citations · 10 Sep 2019

Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction
Aviral Kumar, Justin Fu, George Tucker, Sergey Levine
Tags: OffRL, OnRL · 1,054 citations · 03 Jun 2019

Off-Policy Deep Reinforcement Learning without Exploration
Scott Fujimoto, David Meger, Doina Precup
Tags: OffRL, BDL · 1,604 citations · 07 Dec 2018

ProMP: Proximal Meta-Policy Search
Jonas Rothfuss, Dennis Lee, I. Clavera, Tamim Asfour, Pieter Abbeel
210 citations · 16 Oct 2018

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Tags: VLM, SSL, SSeg · 94,729 citations · 11 Oct 2018

On First-Order Meta-Learning Algorithms
Alex Nichol, Joshua Achiam, John Schulman
2,232 citations · 08 Mar 2018

Addressing Function Approximation Error in Actor-Critic Methods
Scott Fujimoto, H. V. Hoof, David Meger
Tags: OffRL · 5,178 citations · 26 Feb 2018

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, Sergey Levine
8,313 citations · 04 Jan 2018

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
Tags: 3DV · 131,414 citations · 12 Jun 2017

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
Tags: OOD · 11,866 citations · 09 Mar 2017