Distribution Parameter Actor-Critic: Shifting the Agent-Environment Boundary for Diverse Action Spaces

We introduce a novel reinforcement learning (RL) framework that treats distribution parameters as actions, redefining the boundary between agent and environment. This reparameterization makes the new action space continuous, regardless of the original action type (discrete, continuous, mixed, etc.). Under this new parameterization, we develop a generalized deterministic policy gradient estimator, Distribution Parameter Policy Gradient (DPPG), which has lower variance than the gradient in the original action space. Although learning the critic over distribution parameters poses new challenges, we introduce interpolated critic learning (ICL), a simple yet effective strategy to enhance learning, supported by insights from bandit settings. Building on TD3, a strong baseline for continuous control, we propose a practical DPPG-based actor-critic algorithm, Distribution Parameter Actor-Critic (DPAC). Empirically, DPAC outperforms TD3 in MuJoCo continuous control tasks from OpenAI Gym and DeepMind Control Suite, and demonstrates competitive performance on the same environments with discretized action spaces.
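To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of what "distribution parameters as actions" could look like in PyTorch: the actor outputs distribution parameters, the critic is learned over (state, distribution parameters), and the actor follows a deterministic policy gradient through that critic, in the spirit of DPPG/DPAC built on TD3. The Gaussian action distribution, network sizes, and all names (ParameterActor, ParameterCritic, sample_env_action, actor_update) are illustrative assumptions.

```python
# Hypothetical sketch of the distribution-parameter reparameterization described
# in the abstract; details (distribution family, architectures) are assumptions.
import torch
import torch.nn as nn


class ParameterActor(nn.Module):
    """Maps a state to distribution parameters (here: mean and log-std of a Gaussian)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * action_dim),  # [mean, log_std] per action dimension
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # The agent's "action" is now a continuous vector of distribution parameters.
        return self.net(state)


class ParameterCritic(nn.Module):
    """Q-function over (state, distribution parameters) instead of (state, sampled action)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, params], dim=-1))


def sample_env_action(params: torch.Tensor, action_dim: int) -> torch.Tensor:
    """Environment-side step: sample the original action from the emitted distribution.
    This sampling now sits on the environment side of the agent-environment boundary."""
    mean, log_std = params.split(action_dim, dim=-1)
    return torch.distributions.Normal(mean, log_std.exp()).sample()


def actor_update(actor, critic, actor_opt, states):
    """Deterministic policy gradient in distribution-parameter space:
    ascend Q(s, mu(s)) with respect to the actor's weights (TD3-style actor step)."""
    actor_opt.zero_grad()
    loss = -critic(states, actor(states)).mean()
    loss.backward()
    actor_opt.step()
    return loss.item()
```

Because the critic takes distribution parameters as input, the stochastic sampling of the executed action happens outside the gradient path, which is one way to read the abstract's claim that the estimator is deterministic in the new action space and has lower variance than a gradient taken in the original action space.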
@article{he2025_2506.16608,
  title   = {Distribution Parameter Actor-Critic: Shifting the Agent-Environment Boundary for Diverse Action Spaces},
  author  = {Jiamin He and A. Rupam Mahmood and Martha White},
  journal = {arXiv preprint arXiv:2506.16608},
  year    = {2025}
}