ResearchTrend.AI

arXiv:2206.04266
There is no Accuracy-Interpretability Tradeoff in Reinforcement Learning for Mazes

9 June 2022
Yishay Mansour
Michal Moshkovitz
Cynthia Rudin
Abstract

Interpretability is an essential building block for trustworthiness in reinforcement learning systems. However, interpretability might come at the cost of deteriorated performance, leading many researchers to build complex models. Our goal is to analyze the cost of interpretability. We show that in certain cases, one can achieve policy interpretability while maintaining its optimality. We focus on a classical problem from reinforcement learning: mazes with $k$ obstacles in $\mathbb{R}^d$. We prove the existence of a small decision tree with a linear function at each inner node and depth $O(\log k + 2^d)$ that represents an optimal policy. Note that for the interesting case of a constant $d$, we have $O(\log k)$ depth. Thus, in this setting, there is no accuracy-interpretability tradeoff. To prove this result, we use a new "compressing" technique that might be useful in additional settings.
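The policy class the abstract describes can be pictured concretely. The following is an illustrative sketch, not the paper's construction: a decision tree whose inner nodes each test a linear function of the state $x \in \mathbb{R}^d$ and whose leaves emit a maze action. The node layout, weights, and action names here are hypothetical.

```python
# Sketch of a decision-tree policy with linear tests at inner nodes,
# the interpretable policy class analyzed in the paper. All concrete
# values below are made up for illustration.

from dataclasses import dataclass
from typing import Sequence, Union


@dataclass
class Leaf:
    action: str  # e.g. a movement direction in the maze


@dataclass
class Node:
    w: Sequence[float]  # linear test: go left iff w . x >= b
    b: float
    left: Union["Node", Leaf]
    right: Union["Node", Leaf]


def evaluate(tree: Union[Node, Leaf], x: Sequence[float]) -> str:
    """Follow linear tests from the root down to a leaf; return its action."""
    while isinstance(tree, Node):
        dot = sum(wi * xi for wi, xi in zip(tree.w, x))
        tree = tree.left if dot >= tree.b else tree.right
    return tree.action


# Hypothetical 2-D policy: a single linear split separating two actions.
policy = Node(w=[1.0, 0.0], b=0.5, left=Leaf("right"), right=Leaf("up"))
```

The paper's result says that for a maze with $k$ obstacles, a tree of this form with depth $O(\log k + 2^d)$ suffices to encode an optimal policy, so the number of linear tests a human must read along any root-to-leaf path stays small.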
