Combined Reinforcement Learning via Abstract Representations

12 September 2018
Vincent François-Lavet, Yoshua Bengio, Doina Precup, Joelle Pineau
OffRL

Papers citing "Combined Reinforcement Learning via Abstract Representations"

15 of 15 papers shown

Bridging State and History Representations: Understanding Self-Predictive RL
Tianwei Ni, Benjamin Eysenbach, Erfan Seyedsalehi, Michel Ma, Clement Gehring, Aditya Mahajan, Pierre-Luc Bacon
AI4TS, AI4CE
17 Jan 2024

Disentangled (Un)Controllable Features
Jacob E. Kooi, Mark Hoogendoorn, Vincent François-Lavet
DRL
31 Oct 2022

Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning
Shuang Qiu, Lingxiao Wang, Chenjia Bai, Zhuoran Yang, Zhaoran Wang
SSL, OffRL
29 Jul 2022

Learning List-wise Representation in Reinforcement Learning for Ads Allocation with Multiple Auxiliary Tasks
Zehua Wang, Guogang Liao, Xiaowen Shi, Xiaoxu Wu, Chuheng Zhang, Yongkang Wang, Xingxing Wang, Dong Wang
OffRL
02 Apr 2022

Factored Adaptation for Non-Stationary Reinforcement Learning
Fan Feng, Biwei Huang, Kun Zhang, Sara Magliacane
CML, OffRL
30 Mar 2022

SAGE: Generating Symbolic Goals for Myopic Models in Deep Reinforcement Learning
A. Chester, Michael Dann, Fabio Zambetta, John Thangarajah
09 Mar 2022

Component Transfer Learning for Deep RL Based on Abstract Representations
Geoffrey van Driessel, Vincent François-Lavet
DRL, OffRL
22 Nov 2021

Low-Dimensional State and Action Representation Learning with MDP Homomorphism Metrics
N. Botteghi, M. Poel, B. Sirmaçek, C. Brune
04 Jul 2021

MICo: Improved representations via sampling-based state similarity for Markov decision processes
P. S. Castro, Tyler Kastner, Prakash Panangaden, Mark Rowland
03 Jun 2021

Learning First-Order Representations for Planning from Black-Box States: New Results
I. D. Rodriguez, Blai Bonet, J. Romero, Hector Geffner
NAI
23 May 2021

Combining Planning and Learning of Behavior Trees for Robotic Assembly
Jonathan Styrud, Matteo Iovino, M. Norrlöf, Mårten Björkman, Christian Smith
16 Mar 2021

Novelty Search in Representational Space for Sample Efficient Exploration
Ruo Yu Tao, Vincent François-Lavet, Joelle Pineau
28 Sep 2020

State Action Separable Reinforcement Learning
Ziyao Zhang, Liang Ma, K. Leung, Konstantinos Poularakis, M. Srivatsa
05 Jun 2020

Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning
Z. Guo, Bernardo Avila-Pires, Bilal Piot, Jean-Bastien Grill, Florent Altché, Rémi Munos, M. G. Azar
BDL, DRL, SSL
30 Apr 2020

Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models
Arunkumar Byravan, Jost Tobias Springenberg, A. Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Y. Siegel, N. Heess, Martin Riedmiller
OffRL
09 Oct 2019