
Keeping Minimal Experience to Achieve Efficient Interpretable Policy Distillation

2 March 2022
Xiao Liu, Shuyang Liu, Wenbin Li, Shangdong Yang, Yang Gao
OffRL

Papers citing "Keeping Minimal Experience to Achieve Efficient Interpretable Policy Distillation"

2 papers shown.

Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel Kochenderfer
AAML · 03 Feb 2017

Safe Exploration in Markov Decision Processes
T. Moldovan, Pieter Abbeel
22 May 2012