A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis

9 June 2022
Damien Ferbach
Christos Tsirigotis
Gauthier Gidel
Avishek A. Bose
Abstract

The Strong Lottery Ticket Hypothesis (SLTH) stipulates the existence of a subnetwork within a sufficiently overparameterized (dense) neural network that, when initialized randomly and without any training, achieves the accuracy of a fully trained target network. Recent works by Da Cunha et al. (2022) and Burkholz (2022) demonstrate that the SLTH can be extended to translation-equivariant networks, i.e. CNNs, with the same level of overparametrization as needed for SLTs in dense networks. However, modern neural networks can incorporate more than just translation symmetry, and designing architectures equivariant to more general symmetries such as rotations and permutations has been a powerful design principle. In this paper, we generalize the SLTH to functions that preserve the action of a group $G$, i.e. $G$-equivariant networks, and prove, with high probability, that one can approximate any $G$-equivariant network of fixed width and depth by pruning a randomly initialized overparametrized $G$-equivariant network to a $G$-equivariant subnetwork. We further prove that our prescribed overparametrization scheme is optimal and provide a lower bound on the number of effective parameters as a function of the error tolerance. We develop our theory for a large range of groups, including subgroups of the Euclidean group $\text{E}(2)$ and of the symmetric group, $G \leq \mathcal{S}_n$, allowing us to find SLTs for MLPs, CNNs, $\text{E}(2)$-steerable CNNs, and permutation-equivariant networks as specific instantiations of our unified framework. Empirically, we verify our theory by pruning overparametrized $\text{E}(2)$-steerable CNNs, $k$-order GNNs, and message-passing GNNs to match the performance of trained target networks.
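
Results in this line of work are typically proved by reducing the approximation of each target weight to a random subset-sum problem: a weight is matched not by training but by selecting (pruning) a subset of random candidate weights. The snippet below is a minimal, self-contained sketch of that core idea only; it is not the paper's $G$-equivariant construction, and the function name and parameters are illustrative.

```python
# Minimal sketch of the random subset-sum idea behind SLTH-style proofs:
# approximate a target weight by *selecting* a subset of random weights,
# with no training. Illustrative only; not the paper's construction.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def prune_to_match(target, n_random=16):
    """Exhaustively pick the subset of n_random Uniform[-1, 1] samples
    whose sum is closest to `target` (feasible for small n_random)."""
    samples = rng.uniform(-1.0, 1.0, size=n_random)
    best_mask, best_err = None, np.inf
    for mask in itertools.product([0, 1], repeat=n_random):
        err = abs(np.dot(mask, samples) - target)
        if err < best_err:
            best_mask, best_err = np.array(mask), err
    return samples, best_mask, best_err

samples, mask, err = prune_to_match(target=0.37)
print(f"kept {mask.sum()} of {len(samples)} random weights, error {err:.2e}")
```

With high probability the achievable error shrinks rapidly as the number of random candidates grows, which is why a modest (logarithmic in the error tolerance) amount of overparametrization suffices in these results.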
