Auto-Encoding Adversarial Imitation Learning
arXiv:2206.11004 (v5, latest) · 22 June 2022
Kaifeng Zhang, Rui Zhao, Ziming Zhang, Yang Gao

Papers citing "Auto-Encoding Adversarial Imitation Learning" (29 papers shown)

 1. Watch and Match: Supercharging Imitation with Regularized Optimal Transport
    Siddhant Haldar, Vaibhav Mathur, Denis Yarats, Lerrel Pinto (30 Jun 2022)
 2. IQ-Learn: Inverse soft-Q Learning for Imitation
    Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, Matthieu Geist, Stefano Ermon (23 Jun 2021)
 3. What Matters for Adversarial Imitation Learning?
    Manu Orsini, Anton Raichuk, Léonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, Marcin Andrychowicz (01 Jun 2021)
 4. Learning to Utilize Shaping Rewards: A New Approach of Reward Shaping
    Yujing Hu, Weixun Wang, Hangtian Jia, Yixiang Wang, Yingfeng Chen, Jianye Hao, Feng Wu, Changjie Fan (05 Nov 2020) [OffRL]
 5. $f$-GAIL: Learning $f$-Divergence for Generative Adversarial Imitation Learning
    Xin Zhang, Jun Luo, Ziming Zhang, Zhi-Li Zhang (02 Oct 2020)
 6. When Will Generative Adversarial Imitation Learning Algorithms Attain Global Convergence
    Ziwei Guan, Tengyu Xu, Yingbin Liang (24 Jun 2020)
 7. Primal Wasserstein Imitation Learning
    Robert Dadashi, Léonard Hussenot, Matthieu Geist, Olivier Pietquin (08 Jun 2020)
 8. State-only Imitation with Transition Dynamics Mismatch
    Tanmay Gangwani, Jian Peng (27 Feb 2020)
 9. Decision-Making with Auto-Encoding Variational Bayes
    Romain Lopez, Pierre Boyeau, Nir Yosef, Michael I. Jordan, Jeffrey Regier (17 Feb 2020) [BDL]
10. Imitation Learning via Off-Policy Distribution Matching
    Ilya Kostrikov, Ofir Nachum, Jonathan Tompson (10 Dec 2019) [OOD, OffRL]
11. Reinforcement Learning from Imperfect Demonstrations under Soft Expert Guidance
    Mingxuan Jing, Xiaojian Ma, Wenbing Huang, F. Sun, Chao Yang, Bin Fang, Huaping Liu (16 Nov 2019)
12. A Divergence Minimization Perspective on Imitation Learning Methods
    Seyed Kamyar Seyed Ghasemipour, R. Zemel, S. Gu (06 Nov 2019)
13. Random Expert Distillation: Imitation Learning via Expert Policy Support Estimation
    Ruohan Wang, C. Ciliberto, P. Amadori, Y. Demiris (16 May 2019)
14. Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations
    Daniel S. Brown, Wonjoon Goo, P. Nagarajan, S. Niekum (12 Apr 2019)
15. Imitation Learning from Imperfect Demonstration
    Yueh-hua Wu, Nontawat Charoenphakdee, Han Bao, Voot Tangkaratt, Masashi Sugiyama (27 Jan 2019)
16. Adversarial Imitation via Variational Inverse Reinforcement Learning
    A. H. Qureshi, Byron Boots, Michael C. Yip (17 Sep 2018)
17. Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
    Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, Jonathan Tompson (09 Sep 2018)
18. Distributed Distributional Deterministic Policy Gradients
    Gabriel Barth-Maron, Matthew W. Hoffman, David Budden, Will Dabney, Dan Horgan, TB Dhruva, Alistair Muldal, N. Heess, Timothy Lillicrap (23 Apr 2018) [OffRL]
19. Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
    Justin Fu, Katie Z Luo, Sergey Levine (30 Oct 2017)
20. BEGAN: Boundary Equilibrium Generative Adversarial Networks
    David Berthelot, Tom Schumm, Luke Metz (31 Mar 2017) [GAN]
21. Third-Person Imitation Learning
    Bradly C. Stadie, Pieter Abbeel, Ilya Sutskever (06 Mar 2017)
22. Wasserstein GAN
    Martín Arjovsky, Soumith Chintala, Léon Bottou (26 Jan 2017) [GAN]
23. Transfer from Simulation to Real World through Learning Deep Inverse Dynamics Model
    Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider, T. Blackwell, Joshua Tobin, Pieter Abbeel, Wojciech Zaremba (11 Oct 2016) [PINN]
24. Energy-based Generative Adversarial Network
    Jiaqi Zhao, Michaël Mathieu, Yann LeCun (11 Sep 2016) [GAN]
25. Generative Adversarial Imitation Learning
    Jonathan Ho, Stefano Ermon (10 Jun 2016) [GAN]
26. Autoencoding beyond pixels using a learned similarity metric
    Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, Ole Winther (31 Dec 2015) [GAN]
27. Learning Continuous Control Policies by Stochastic Value Gradients
    N. Heess, Greg Wayne, David Silver, Timothy Lillicrap, Yuval Tassa, Tom Erez (30 Oct 2015)
28. High-Dimensional Continuous Control Using Generalized Advantage Estimation
    John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, Pieter Abbeel (08 Jun 2015) [OffRL]
29. Trust Region Policy Optimization
    John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel (19 Feb 2015)