A Classification of G-invariant Shallow Neural Networks

18 May 2022
Devanshu Agrawal
James Ostrowski
arXiv:2205.09219
Abstract

When trying to fit a deep neural network (DNN) to a G-invariant target function with G a group, it only makes sense to constrain the DNN to be G-invariant as well. However, there can be many different ways to do this, thus raising the problem of "G-invariant neural architecture design": What is the optimal G-invariant architecture for a given problem? Before we can consider the optimization problem itself, we must understand the search space, the architectures in it, and how they relate to one another. In this paper, we take a first step towards this goal; we prove a theorem that gives a classification of all G-invariant single-hidden-layer or "shallow" neural network (G-SNN) architectures with ReLU activation for any finite orthogonal group G, and we prove a second theorem that characterizes the inclusion maps or "network morphisms" between the architectures that can be leveraged during neural architecture search (NAS). The proof is based on a correspondence of every G-SNN to a signed permutation representation of G acting on the hidden neurons; the classification is equivalently given in terms of the first cohomology classes of G, thus admitting a topological interpretation. The G-SNN architectures corresponding to nontrivial cohomology classes have, to our knowledge, never been explicitly identified in the literature previously. Using a code implementation, we enumerate the G-SNN architectures for some example groups G and visualize their structure. Finally, we prove that architectures corresponding to inequivalent cohomology classes coincide in function space only when their weight matrices are zero, and we discuss the implications of this for NAS.
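As a rough illustration of the kind of object being classified, the sketch below (our own toy construction, not the authors' code or their classification) builds a shallow ReLU network on R^n that is invariant under the two-element orthogonal group G = {I, -I} acting by sign flip. Invariance here comes from the simplest weight-sharing scheme, summing each hidden neuron over the G-orbit of the input; the architectures the paper identifies through signed permutation representations and nontrivial cohomology classes go beyond this construction.

# Toy sketch (illustrative only): a shallow ReLU network made invariant
# under G = {I, -I} acting on R^n by x -> -x.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def g_snn(x, W, b, a, c):
    """f(x) = sum_k a_k * ReLU(w_k . x + b_k) + c, with every hidden neuron
    duplicated over the G-orbit {x, -x}, so f(x) = f(-x) by construction."""
    pre = np.stack([W @ x + b, W @ (-x) + b])   # pre-activations for g = I and g = -I
    hidden = relu(pre).sum(axis=0)              # symmetrize (sum) over the group
    return a @ hidden + c

rng = np.random.default_rng(0)
n, n_hidden = 5, 8
W = rng.normal(size=(n_hidden, n))
b = rng.normal(size=n_hidden)
a = rng.normal(size=n_hidden)
c = 0.3

x = rng.normal(size=n)
print(np.isclose(g_snn(x, W, b, a, c), g_snn(-x, W, b, a, c)))  # True: the network is G-invariant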
