BASIL: Best-Action Symbolic Interpretable Learning for Evolving Compact RL Policies

Main: 19 pages
Bibliography: 3 pages
1 table
Abstract

The quest for interpretable reinforcement learning is a grand challenge for the deployment of autonomous decision-making systems in safety-critical applications. Modern deep reinforcement learning approaches, while powerful, tend to produce opaque policies that compromise verification, reduce transparency, and impede human oversight. To address this, we introduce BASIL (Best-Action Symbolic Interpretable Learning), a systematic approach for generating symbolic, rule-based policies via online evolutionary search with quality-diversity (QD) optimization. BASIL represents policies as ordered lists of symbolic predicates over state variables, ensuring full interpretability and tractable policy complexity. A QD archive maintains behavioral and structural diversity among top-performing solutions, while a complexity-aware fitness encourages the synthesis of compact representations. The evolutionary system supports exact constraints on rule count and adaptable trade-offs between transparency and expressiveness. Empirical comparisons on three benchmark tasks (CartPole-v1, MountainCar-v0, and Acrobot-v1) show that BASIL consistently synthesizes compact, interpretable controllers with performance comparable to deep reinforcement learning baselines. This article thus introduces a new interpretable policy synthesis method that unifies symbolic expressiveness, evolutionary diversity, and online learning in a single framework.
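
To make the policy representation concrete, below is a minimal sketch of a rule-list controller with a complexity-aware fitness, in the spirit of the abstract. It assumes Gymnasium's CartPole-v1; the rule indices, thresholds, default action, and penalty weight lam are illustrative assumptions, not details from the paper.

import gymnasium as gym

# Hypothetical rule-list policy: an ordered list of (state_index, comparison,
# threshold, action) tuples evaluated top-down; the first matching rule fires.
RULES = [
    (2, "gt", 0.0, 1),   # if pole_angle > 0.0: push right
    (3, "lt", 0.0, 0),   # elif pole_angular_velocity < 0.0: push left
]
DEFAULT_ACTION = 1       # fallback when no predicate matches

def act(obs, rules=RULES, default=DEFAULT_ACTION):
    """Return the action of the first rule whose predicate holds."""
    for idx, op, thresh, action in rules:
        if (op == "gt" and obs[idx] > thresh) or (op == "lt" and obs[idx] < thresh):
            return action
    return default

def fitness(rules, episodes=5, lam=0.1):
    """Complexity-aware fitness: mean episode return minus a per-rule penalty.

    The penalty weight lam is an assumed hyperparameter standing in for
    whatever complexity term the paper uses.
    """
    env = gym.make("CartPole-v1")
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, reward, terminated, truncated, _ = env.step(act(obs, rules))
            total += reward
            done = terminated or truncated
    env.close()
    return total / episodes - lam * len(rules)

print(f"fitness = {fitness(RULES):.1f}")

Evaluating predicates top-down keeps the policy both executable and human-readable, and the per-rule penalty reflects the abstract's preference for compact rule lists; an evolutionary loop would mutate such rule lists and keep diverse elites in a QD archive.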

@article{shahnazari2025_2506.00328,
  title={BASIL: Best-Action Symbolic Interpretable Learning for Evolving Compact RL Policies},
  author={Kourosh Shahnazari and Seyed Moein Ayyoubzadeh and Mohammadali Keshtparvar},
  journal={arXiv preprint arXiv:2506.00328},
  year={2025}
}