Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning
22 May 2024
Jiuqi Wang, Ethan Blaser, Hadi Daneshmand, Shangtong Zhang
OffRL
arXiv: 2405.13861
Papers citing "Transformers Can Learn Temporal Difference Methods for In-Context Reinforcement Learning" (7 of 7 papers shown):
Do LLM Agents Have Regret? A Case Study in Online Learning and Games
Chanwoo Park, Xiangyu Liu, Asuman Ozdaglar, Kaiqing Zhang
25 Mar 2024
Can large language models explore in-context?
Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins
LM&Ro
LLMAG
LRM
22 Mar 2024
Generalization to New Sequential Decision Making Tasks with In-Context Learning
Sharath Chandra Raparthy, Eric Hambro, Robert Kirk, Mikael Henaff, Roberta Raileanu
OffRL
06 Dec 2023
Do Transformers Parse while Predicting the Masked Word?
Haoyu Zhao, A. Panigrahi, Rong Ge, Sanjeev Arora
14 Mar 2023
Structured State Space Models for In-Context Reinforcement Learning
Chris Xiaoxuan Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob N. Foerster, Satinder Singh, Feryal M. P. Behbahani
AI4TS
07 Mar 2023
Large Language Models can Implement Policy Iteration
Ethan A. Brooks, Logan Walls, Richard L. Lewis, Satinder Singh
LM&Ro
OffRL
07 Oct 2022
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
OOD
09 Mar 2017