ResearchTrend.AI

Approximate Equivariance in Reinforcement Learning

6 November 2024
Jung Yeon Park
Sujay Bhatt
Sihan Zeng
Lawson L. S. Wong
Alec Koppel
Sumitra Ganesh
Robin Walters
Abstract

Equivariant neural networks have shown great success in reinforcement learning, improving sample efficiency and generalization when there is symmetry in the task. However, in many problems, only approximate symmetry is present, which makes imposing exact symmetry inappropriate. Recently, approximately equivariant networks have been proposed for supervised classification and modeling physical systems. In this work, we develop approximately equivariant algorithms in reinforcement learning (RL). We define approximately equivariant MDPs and theoretically characterize the effect of approximate equivariance on the optimal Q function. We propose novel RL architectures using relaxed group and steerable convolutions and experiment on several continuous control domains and stock trading with real financial data. Our results demonstrate that the approximately equivariant network performs on par with exactly equivariant networks when exact symmetries are present, and outperforms them when the domains exhibit approximate symmetry. As an added byproduct of these techniques, we observe increased robustness to noise at test time. Our code is available at this https URL.
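To make the "relaxed group convolution" idea concrete: instead of sharing one filter across all group elements (which enforces exact equivariance), each group element gets its own learned mixture over a small filter bank, so the network can deviate from strict symmetry where the task does. The following is a minimal NumPy sketch for the rotation group C4 on images; the filter bank size, mixing weights, and the lifting-layer setup are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def corr2d(x, w):
    """Valid-mode 2D cross-correlation."""
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def relaxed_lifting_conv_c4(x, bank, mix):
    """Relaxed C4 lifting convolution (illustrative sketch).

    x:    (H, W) input
    bank: (L, k, k) filter bank
    mix:  (4, L) per-rotation mixing weights; when all four rows are
          identical, strict weight sharing -- and hence exact C4
          equivariance -- is recovered.
    Returns (4, H-k+1, W-k+1): one output map per rotation element.
    """
    out = []
    for g in range(4):
        psi_g = np.tensordot(mix[g], bank, axes=1)  # (k, k) mixed filter
        out.append(corr2d(x, np.rot90(psi_g, g)))   # filter rotated by g*90 deg
    return np.stack(out)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
bank = rng.standard_normal((2, 3, 3))

# Equal mixing rows -> exactly equivariant: rotating the input cyclically
# permutes the group channels and rotates each feature map.
mix_exact = np.tile(rng.standard_normal(2), (4, 1))
y = relaxed_lifting_conv_c4(x, bank, mix_exact)
y_rot = relaxed_lifting_conv_c4(np.rot90(x), bank, mix_exact)
exact_err = max(np.abs(y_rot[g] - np.rot90(y[(g - 1) % 4])).max()
                for g in range(4))

# Perturbing the mixing rows relaxes the constraint, yielding a small,
# data-driven equivariance error instead of a hard architectural one.
mix_relaxed = mix_exact + 0.1 * rng.standard_normal((4, 2))
y2 = relaxed_lifting_conv_c4(x, bank, mix_relaxed)
y2_rot = relaxed_lifting_conv_c4(np.rot90(x), bank, mix_relaxed)
relaxed_err = max(np.abs(y2_rot[g] - np.rot90(y2[(g - 1) % 4])).max()
                  for g in range(4))
```

With identical mixing rows `exact_err` is zero up to floating point, while the perturbed mixture gives a nonzero `relaxed_err` that could, in a trained network, be driven small or large depending on how symmetric the task actually is.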

View on arXiv
@article{park2025_2411.04225,
  title={Approximate Equivariance in Reinforcement Learning},
  author={Jung Yeon Park and Sujay Bhatt and Sihan Zeng and Lawson L. S. Wong and Alec Koppel and Sumitra Ganesh and Robin Walters},
  journal={arXiv preprint arXiv:2411.04225},
  year={2025}
}