
Inverse Q-Learning Done Right: Offline Imitation Learning in $Q^\pi$-Realizable MDPs

Main: 9 pages
3 figures
Bibliography: 3 pages
1 table
Appendix: 16 pages
Abstract

We study the problem of offline imitation learning in Markov decision processes (MDPs), where the goal is to learn a well-performing policy given a dataset of state-action pairs generated by an expert policy. Complementing a recent line of work on this topic that assumes the expert belongs to a tractable class of known policies, we approach this problem from a new angle and leverage a different type of structural assumption about the environment. Specifically, for the class of linear $Q^\pi$-realizable MDPs, we introduce a new algorithm called saddle-point offline imitation learning (SPOIL), which is guaranteed to match the performance of any expert up to an additive error $\varepsilon$ with access to $\mathcal{O}(\varepsilon^{-2})$ samples. Moreover, we extend this result to possibly non-linear $Q^\pi$-realizable MDPs at the cost of a worse sample complexity of order $\mathcal{O}(\varepsilon^{-4})$. Finally, our analysis suggests a new loss function for training critic networks from expert data in deep imitation learning. Empirical evaluations on standard benchmarks demonstrate that the neural-net implementation of SPOIL is superior to behavior cloning and competitive with state-of-the-art algorithms.
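To make the saddle-point idea concrete, below is a minimal illustrative sketch of an alternating critic/policy update driven purely by expert state-action pairs: the critic (the maximizing player) pushes up the gap between the expert's action values and the current policy's value, while the policy (the minimizing player) closes that gap. This is not the SPOIL loss from the paper, only a generic saddle-point formulation under assumed discrete actions; the module names, shapes, and optimizers are hypothetical placeholders.

```python
# Illustrative sketch only -- NOT the paper's SPOIL algorithm.
# Assumes a discrete action space and PyTorch; all names are hypothetical.
import torch

def saddle_point_step(critic, actor, states, expert_actions,
                      critic_opt, actor_opt):
    """One alternating max/min step on expert (state, action) pairs.

    critic(states) -> Q-values of shape [B, num_actions]
    actor(states)  -> policy logits of shape [B, num_actions]
    """
    # --- critic (adversary) ascends the expert-vs-policy gap ---
    q = critic(states)                                     # Q(s, .), [B, A]
    with torch.no_grad():
        pi = torch.softmax(actor(states), dim=-1)          # current policy, frozen
    v = (pi * q).sum(dim=-1)                               # V^pi(s) under this critic
    q_expert = q.gather(1, expert_actions.unsqueeze(1)).squeeze(1)
    gap = (q_expert - v).mean()                            # expert advantage
    critic_opt.zero_grad()
    (-gap).backward()                                      # gradient ascent on the gap
    critic_opt.step()

    # --- policy (learner) descends the gap: raise its own value under the critic ---
    with torch.no_grad():
        q = critic(states)                                 # updated critic, frozen
    pi = torch.softmax(actor(states), dim=-1)
    actor_loss = -(pi * q).sum(dim=-1).mean()              # maximize V^pi(s)
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```

In this toy form the critic plays the role of an adversarially chosen Q-function that certifies how far the learner is from the expert, and the policy update shrinks that certificate; the paper's contribution is a specific instantiation of this scheme with sample-complexity guarantees in $Q^\pi$-realizable MDPs.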

@article{moulin2025_2505.19946,
  title={Inverse Q-Learning Done Right: Offline Imitation Learning in $Q^\pi$-Realizable MDPs},
  author={Antoine Moulin and Gergely Neu and Luca Viano},
  journal={arXiv preprint arXiv:2505.19946},
  year={2025}
}