ResearchTrend.AI

Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning

1 June 2018
Kavosh Asadi
Evan Cater
Dipendra Kumar Misra
Michael L. Littman
    OffRL

Papers citing "Equivalence Between Wasserstein and Value-Aware Loss for Model-based Reinforcement Learning"

4 / 4 papers shown
Gradient-Aware Model-based Policy Search
P. D'Oro, Alberto Maria Metelli, Andrea Tirinzoni, Matteo Papini, Marcello Restelli
09 Sep 2019

Combating the Compounding-Error Problem with a Multi-step Model
Kavosh Asadi, Dipendra Kumar Misra, Seungchan Kim, Michael L. Littman
LRM
30 May 2019

Towards a Simple Approach to Multi-step Model-based Reinforcement Learning
Kavosh Asadi, Evan Cater, Dipendra Kumar Misra, Michael L. Littman
OffRL
31 Oct 2018

Lipschitz Continuity in Model-based Reinforcement Learning
Kavosh Asadi, Dipendra Kumar Misra, Michael L. Littman
KELM
19 Apr 2018