Polynomial Width is Sufficient for Set Representation with High-dimensional Features

8 July 2023
Peihao Wang
Shenghao Yang
Shu Li
Zhangyang Wang
Pan Li
arXiv:2307.04001
Abstract

Set representation has become ubiquitous in deep learning for modeling the inductive bias of neural networks that are insensitive to the input order. DeepSets is the most widely used neural network architecture for set representation. It involves embedding each set element into a latent space with dimension $L$, followed by a sum pooling to obtain a whole-set embedding, and finally mapping the whole-set embedding to the output. In this work, we investigate the impact of the dimension $L$ on the expressive power of DeepSets. Previous analyses either oversimplified high-dimensional features to be one-dimensional features or were limited to analytic activations, thereby diverging from practical use or resulting in $L$ that grows exponentially with the set size $N$ and feature dimension $D$. To investigate the minimal value of $L$ that achieves sufficient expressive power, we present two set-element embedding layers: (a) linear + power activation (LP) and (b) linear + exponential activation (LE). We demonstrate that $L$ being $\mathrm{poly}(N, D)$ is sufficient for set representation using both embedding layers. We also provide a lower bound of $L$ for the LP embedding layer. Furthermore, we extend our results to permutation-equivariant set functions and the complex field.
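The abstract describes the standard DeepSets template, $f(S) = \rho\left(\sum_{x \in S} \phi(x)\right)$, together with the two set-element embedding layers the paper studies. Below is a minimal PyTorch sketch of that template, assuming illustrative layer sizes and a small MLP for $\rho$; the LP/LE switch follows the abstract's description, but the module names, dimensions, and the choice of $\rho$ are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DeepSets(nn.Module):
    """Sketch of f(S) = rho(sum_{x in S} phi(x)), with phi an LP or LE layer."""

    def __init__(self, in_dim: int, latent_dim: int, out_dim: int,
                 embedding: str = "LP", power: int = 2):
        super().__init__()
        # phi: linear map into the L-dimensional latent space,
        # followed by a power or exponential activation
        self.linear = nn.Linear(in_dim, latent_dim)
        self.embedding = embedding
        self.power = power
        # rho: maps the whole-set embedding to the output
        # (an MLP here, purely as an illustrative choice)
        self.rho = nn.Sequential(
            nn.Linear(latent_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, D) -- a batch of sets, each with N elements of dimension D
        z = self.linear(x)
        if self.embedding == "LP":   # linear + power activation
            z = z ** self.power
        else:                        # "LE": linear + exponential activation
            z = torch.exp(z)
        pooled = z.sum(dim=1)        # sum pooling makes the output order-insensitive
        return self.rho(pooled)


# Permuting the elements of each set leaves the output unchanged.
model = DeepSets(in_dim=8, latent_dim=64, out_dim=1, embedding="LE")
s = torch.randn(2, 5, 8)
perm = s[:, torch.randperm(5), :]
assert torch.allclose(model(s), model(perm), atol=1e-5)
```

In this notation, the paper's question is how large `latent_dim` ($L$) must be for such a network to represent set functions exactly; its answer is that $\mathrm{poly}(N, D)$ suffices for both the LP and LE variants.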
