
q-exponential family for policy optimization

Lingwei Zhu
Haseeb Shah
Han Wang
Yukie Nagai
Martha White
Abstract

Policy optimization methods benefit from a simple and tractable policy parametrization, usually the Gaussian for continuous action spaces. In this paper, we consider a broader policy family that remains tractable: the q-exponential family. This family of policies is flexible, allowing the specification of both heavy-tailed policies (q > 1) and light-tailed policies (q < 1). This paper examines the interplay between q-exponential policies and several actor-critic algorithms on both online and offline problems. We find that heavy-tailed policies are more effective in general and can consistently improve on the Gaussian. In particular, we find the Student's t-distribution to be more stable than the Gaussian across settings, and that a heavy-tailed q-Gaussian for Tsallis Advantage Weighted Actor-Critic consistently performs well on offline benchmark problems. Our code is available at \url{this https URL}.
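The abstract gives no implementation details, so the following is only a rough sketch of what a heavy-tailed q-exponential policy head could look like. It relies on the standard fact that a one-dimensional q-Gaussian with 1 < q < 3 coincides with a Student's t-distribution with nu = (3 - q) / (q - 1) degrees of freedom; the class name HeavyTailedPolicy, the network sizes, and all hyperparameters are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): a heavy-tailed q-Gaussian
# policy head built from torch.distributions.StudentT, using the equivalence
# nu = (3 - q) / (q - 1) for 1 < q < 3.
import torch
import torch.nn as nn
from torch.distributions import StudentT, Independent


class HeavyTailedPolicy(nn.Module):
    """Policy head whose output action distribution is heavy-tailed (q > 1)."""

    def __init__(self, obs_dim: int, act_dim: int, q: float = 1.5, hidden: int = 256):
        super().__init__()
        assert 1.0 < q < 3.0, "heavy-tailed q-Gaussian requires 1 < q < 3"
        # Degrees of freedom of the Student's t equivalent to the q-Gaussian.
        self.df = (3.0 - q) / (q - 1.0)
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.loc = nn.Linear(hidden, act_dim)
        self.log_scale = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor) -> Independent:
        h = self.body(obs)
        loc = self.loc(h)
        scale = self.log_scale(h).clamp(-5.0, 2.0).exp()
        # Independent treats the per-dimension t-distributions as one
        # factorized multivariate action distribution.
        return Independent(StudentT(self.df, loc=loc, scale=scale), 1)


# Usage: sample an action and compute its log-probability for an actor update.
policy = HeavyTailedPolicy(obs_dim=8, act_dim=2, q=1.5)
obs = torch.randn(4, 8)
dist = policy(obs)
action = dist.rsample()          # reparameterized sample
log_prob = dist.log_prob(action)
```

Setting q closer to 1 recovers a distribution close to the Gaussian, while larger q (up to 3) gives heavier tails; the light-tailed case q < 1 has bounded support and is not covered by this Student's t construction.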
