
Provably Efficient Exploration in Inverse Constrained Reinforcement Learning

Abstract

Optimizing objective functions subject to constraints is fundamental in many real-world applications. However, these constraints are often not explicitly defined and must be inferred from expert agent behaviors, a problem known as Inverse Constraint Inference. Inverse Constrained Reinforcement Learning (ICRL) is a common solver for recovering feasible constraints in complex environments, relying on training samples collected from interactive environments. However, the efficacy and efficiency of current sampling strategies remain unclear. To bridge this gap, we propose a strategic exploration framework with guaranteed sampling efficiency. By defining the feasible cost set for ICRL problems, we analyze how estimation errors in transition dynamics and the expert policy influence the feasibility of inferred constraints. Based on this analysis, we introduce two exploratory algorithms that achieve efficient constraint inference by 1) dynamically reducing the bounded aggregate error of cost estimations or 2) strategically constraining the exploration policy around plausibly optimal ones. Both algorithms are theoretically grounded with tractable sample complexity, and their performance is validated empirically across various environments.
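
To illustrate the first idea, uncertainty-driven exploration that reduces estimation error in the dynamics underlying constraint inference, the sketch below shows a minimal, hypothetical loop on a small tabular MDP. It is not the paper's algorithm: the names (bonus_scale, rollout, transition_estimate_and_bonus) and the count-based bonus are illustrative assumptions, standing in for the paper's bounded aggregate error of cost estimations.

# Minimal illustrative sketch (assumed, not the paper's method): explore the
# (s, a) pairs with the largest count-based uncertainty so that estimates of
# the transition dynamics, and hence of any inferred cost, tighten over time.
import numpy as np

n_states, n_actions, horizon = 5, 2, 10
rng = np.random.default_rng(0)

# True (unknown) dynamics, used here only to simulate environment interaction.
P_true = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

counts = np.zeros((n_states, n_actions, n_states))  # visitation counts N(s, a, s')

def rollout(policy, start_state=0):
    """Collect one trajectory under `policy` and update transition counts."""
    s = start_state
    for _ in range(horizon):
        a = policy[s]
        s_next = rng.choice(n_states, p=P_true[s, a])
        counts[s, a, s_next] += 1
        s = s_next

def transition_estimate_and_bonus(bonus_scale=1.0):
    """Empirical dynamics plus a count-based uncertainty bonus per (s, a)."""
    n_sa = counts.sum(axis=-1)
    P_hat = np.where(n_sa[..., None] > 0,
                     counts / np.maximum(n_sa[..., None], 1),
                     1.0 / n_states)
    bonus = bonus_scale / np.sqrt(np.maximum(n_sa, 1))
    return P_hat, bonus

# Exploration loop: repeatedly visit the least-certain (s, a) pairs so the
# aggregate error of downstream cost/constraint estimates shrinks.
for episode in range(200):
    P_hat, bonus = transition_estimate_and_bonus()
    explore_policy = bonus.argmax(axis=1)  # greedily chase the largest bonus
    rollout(explore_policy)

P_hat, bonus = transition_estimate_and_bonus()
print("max per-(s,a) uncertainty bonus after exploration:", bonus.max().round(3))

The second strategy described in the abstract would additionally restrict this exploratory policy to a set of plausibly optimal policies rather than exploring all actions freely; that restriction is omitted from the sketch for brevity.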

@article{yue2025_2409.15963,
  title={Provably Efficient Exploration in Inverse Constrained Reinforcement Learning},
  author={Bo Yue and Jian Li and Guiliang Liu},
  journal={arXiv preprint arXiv:2409.15963},
  year={2025}
}