Improving Equivariant Networks with Probabilistic Symmetry Breaking

27 March 2025
Hannah Lawrence
Vasco Portilheiro
Yan Zhang
Sékou-Oumar Kaba
Abstract

Equivariance encodes known symmetries into neural networks, often enhancing generalization. However, equivariant networks cannot break symmetries: the output of an equivariant network must, by definition, have at least the same self-symmetries as the input. This poses an important problem, both (1) for prediction tasks on domains where self-symmetries are common, and (2) for generative models, which must break symmetries in order to reconstruct from highly symmetric latent spaces. This fundamental limitation can be addressed by considering equivariant conditional distributions, instead of equivariant functions. We present novel theoretical results that establish necessary and sufficient conditions for representing such distributions. Concretely, this representation provides a practical framework for breaking symmetries in any equivariant network via randomized canonicalization. Our method, SymPE (Symmetry-breaking Positional Encodings), admits a simple interpretation in terms of positional encodings. This approach expands the representational power of equivariant networks while retaining the inductive bias of symmetry, which we justify through generalization bounds. Experimental results demonstrate that SymPE significantly improves the performance of group-equivariant and graph neural networks across diffusion models for graphs, graph autoencoders, and lattice spin system modeling.
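
To make the randomized-canonicalization idea concrete, below is a minimal sketch in PyTorch for the case of a permutation group acting on graph nodes. This is an illustration under stated assumptions, not the authors' released implementation: the function name sympe_features and the choice of a one-hot canonical encoding are hypothetical.

import torch

def sympe_features(x: torch.Tensor) -> torch.Tensor:
    # x: (n, d) node features for a graph with n nodes.
    # Sample a random group element g ~ Uniform(S_n) and apply it to a
    # fixed canonical positional encoding z (here, one-hot node indices).
    n = x.shape[0]
    g = torch.randperm(n)
    pe = torch.eye(n, dtype=x.dtype, device=x.device)[g]
    # Appending g·z as extra input lets each forward pass break the
    # input's self-symmetries, while averaging over the randomness in g
    # keeps the induced conditional distribution equivariant.
    return torch.cat([x, pe], dim=-1)

An equivariant network applied to sympe_features(x) then defines a distribution over outputs via the randomness in g, rather than a single symmetry-constrained function of x.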

View on arXiv: https://arxiv.org/abs/2503.21985
@article{lawrence2025_2503.21985,
  title={Improving Equivariant Networks with Probabilistic Symmetry Breaking},
  author={Hannah Lawrence and Vasco Portilheiro and Yan Zhang and Sékou-Oumar Kaba},
  journal={arXiv preprint arXiv:2503.21985},
  year={2025}
}