q-exponential family for policy optimization

14 August 2024
Lingwei Zhu, Haseeb Shah, Han Wang, Yukie Nagai, Martha White
arXiv: 2408.07245 (https://arxiv.org/abs/2408.07245)
Abstract

Policy optimization methods benefit from a simple and tractable policy parametrization, usually the Gaussian for continuous action spaces. In this paper, we consider a broader policy family that remains tractable: the q-exponential family. This family of policies is flexible, allowing the specification of both heavy-tailed policies (q > 1) and light-tailed policies (q < 1). This paper examines the interplay between q-exponential policies and several actor-critic algorithms on both online and offline problems. We find that heavy-tailed policies are more effective in general and can consistently improve on the Gaussian. In particular, we find the Student's t-distribution to be more stable than the Gaussian across settings, and that a heavy-tailed q-Gaussian used with Tsallis Advantage Weighted Actor-Critic consistently performs well on offline benchmark problems. Our code is available at https://github.com/lingweizhu/qexp.
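
For a concrete handle on the policy class: in the heavy-tailed regime 1 < q < 3, the q-Gaussian coincides with a rescaled Student's t-distribution with nu = (3 - q) / (q - 1) degrees of freedom, which makes sampling and log-densities straightforward to implement. The Python sketch below is illustrative only and is not taken from the linked repository; the function names and the location-scale (mu, sigma) parametrization are assumptions.

import numpy as np
from scipy import stats

def q_to_nu(q: float) -> float:
    # Heavy-tailed q-Gaussians (1 < q < 3) coincide with Student's t
    # distributions with nu = (3 - q) / (q - 1) degrees of freedom.
    assert 1.0 < q < 3.0, "heavy-tailed regime only"
    return (3.0 - q) / (q - 1.0)

def sample_q_gaussian(q, mu=0.0, sigma=1.0, size=1, rng=None):
    # Draw actions from a heavy-tailed q-Gaussian policy, modeled here as
    # mu + sigma * T with T a standard Student's t variate (an assumed
    # parametrization, not necessarily the paper's exact one).
    rng = np.random.default_rng() if rng is None else rng
    return mu + sigma * rng.standard_t(q_to_nu(q), size=size)

def log_prob_q_gaussian(x, q, mu=0.0, sigma=1.0):
    # Log-density under the same location-scale Student's t construction;
    # an actor-critic loss (e.g. a likelihood-weighted regression
    # objective) would differentiate through a term like this.
    return stats.t.logpdf(x, df=q_to_nu(q), loc=mu, scale=sigma)

# Example: q = 1.5 gives nu = 3 (heavy tails, hence wider exploration);
# as q -> 1, nu -> infinity and the Gaussian policy is recovered.
actions = sample_q_gaussian(q=1.5, mu=0.0, sigma=0.5, size=5)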
