Sample and Computationally Efficient Continuous-Time Reinforcement Learning with General Function Approximation

Abstract

Continuous-time reinforcement learning (CTRL) provides a principled framework for sequential decision-making in environments where interactions evolve continuously over time. Despite its empirical success, the theoretical understanding of CTRL remains limited, especially in settings with general function approximation. In this work, we propose a model-based CTRL algorithm that achieves both sample and computational efficiency. Our approach leverages optimism-based confidence sets to establish the first sample complexity guarantee for CTRL with general function approximation, showing that a near-optimal policy can be learned with a suboptimality gap of $\tilde{O}(\sqrt{d_{\mathcal{R}} + d_{\mathcal{F}}}\,N^{-1/2})$ using $N$ measurements, where $d_{\mathcal{R}}$ and $d_{\mathcal{F}}$ denote the distributional Eluder dimensions of the reward and dynamics functions, respectively, capturing the complexity of general function approximation in reinforcement learning. Moreover, we introduce structured policy updates and an alternative measurement strategy that significantly reduce the number of policy updates and rollouts while maintaining competitive sample efficiency. We validate the proposed algorithms through experiments on continuous control tasks and diffusion model fine-tuning, demonstrating comparable performance with significantly fewer policy updates and rollouts.
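As a rough illustration of the stated rate (a minimal Python sketch; the function name, the absorbed constant, and the example Eluder-dimension values are hypothetical and not taken from the paper), the snippet below evaluates the $\tilde{O}(\sqrt{d_{\mathcal{R}} + d_{\mathcal{F}}}\,N^{-1/2})$ bound up to constants and logarithmic factors, showing that quadrupling the number of measurements $N$ halves the gap:

import math

def suboptimality_bound(n_measurements: int, d_reward: float, d_dynamics: float,
                        scale: float = 1.0) -> float:
    """Illustrative sqrt(d_R + d_F) / sqrt(N) scaling of the suboptimality gap.

    `scale` absorbs the constants and logarithmic factors hidden by the
    tilde-O notation; d_reward and d_dynamics stand in for the distributional
    Eluder dimensions of the reward and dynamics function classes (values
    below are placeholders, not numbers from the paper).
    """
    return scale * math.sqrt(d_reward + d_dynamics) / math.sqrt(n_measurements)

# Quadrupling the number of measurements N halves the bound.
for n in (1_000, 4_000, 16_000):
    print(n, round(suboptimality_bound(n, d_reward=10.0, d_dynamics=20.0), 4))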

@article{zhao2025_2505.14821,
  title={Sample and Computationally Efficient Continuous-Time Reinforcement Learning with General Function Approximation},
  author={Runze Zhao and Yue Yu and Adams Yiyue Zhu and Chen Yang and Dongruo Zhou},
  journal={arXiv preprint arXiv:2505.14821},
  year={2025}
}