ResearchTrend.AI
Unveiling the Potential of Superexpressive Networks in Implicit Neural Representations

27 March 2025
Uvini Balasuriya Mudiyanselage
Woojin Cho
Minju Jo
Noseong Park
Kookjin Lee
Abstract

In this study, we examine the potential of one of the ``superexpressive'' networks in the context of learning neural functions for representing complex signals and performing downstream machine learning tasks. Our focus is on evaluating their performance on computer vision and scientific machine learning tasks, including signal representation, inverse problems, and the solution of partial differential equations. Through an empirical investigation across various benchmark tasks, we demonstrate that superexpressive networks, as proposed by [Zhang et al., NeurIPS 2022], which employ a specialized network structure characterized by an additional dimension beyond width and depth, namely ``height'', can surpass recent implicit neural representations that use highly specialized nonlinear activation functions.
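For context, an implicit neural representation (INR) is a coordinate network trained to map positions to signal values. The minimal sketch below fits a 1D sine signal with a one-hidden-layer coordinate MLP using a sinusoidal activation (in the style of the specialized-activation INRs the abstract mentions, e.g. SIREN); it is an illustrative baseline, not the superexpressive ``height''-augmented architecture of Zhang et al. The network size, frequency scale `w0`, and learning rate are illustrative assumptions.

```python
import numpy as np

# Minimal INR sketch: a coordinate MLP x -> f(x) with a sine activation.
# This is a generic baseline for illustration, NOT the superexpressive
# network evaluated in the paper. All hyperparameters are assumptions.

rng = np.random.default_rng(0)

# Target signal: one period of a sine wave sampled on [0, 1].
N, H = 256, 64
x = np.linspace(0.0, 1.0, N).reshape(-1, 1)
t = np.sin(2.0 * np.pi * x)

w0 = 6.0                                    # frequency scale (assumption)
W1 = rng.normal(0.0, 1.0, (1, H))           # input -> hidden weights
b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0 / np.sqrt(H), (H, 1))  # hidden -> output weights
b2 = np.zeros(1)

def forward(x):
    """Coordinate MLP: sine-activated hidden layer, linear output."""
    z = w0 * (x @ W1 + b1)
    h = np.sin(z)
    return z, h, h @ W2 + b2

_, _, y0 = forward(x)
initial_loss = float(np.mean((y0 - t) ** 2))

# Full-batch gradient descent on the mean-squared reconstruction error.
lr = 0.05
for _ in range(5000):
    z, h, y = forward(x)
    dy = 2.0 * (y - t) / N                  # dL/dy for the MSE loss
    gW2, gb2 = h.T @ dy, dy.sum(0)
    dh = dy @ W2.T
    dz = dh * np.cos(z) * w0                # chain rule through sin(w0 * .)
    gW1, gb1 = x.T @ dz, dz.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, _, y = forward(x)
final_loss = float(np.mean((y - t) ** 2))
```

The network is queried at continuous coordinates rather than storing samples, which is what makes INRs useful for the signal-representation and PDE tasks the abstract lists; the paper's contribution is showing that a superexpressive structure can replace the specialized activation used here.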

@article{mudiyanselage2025_2503.21166,
  title={Unveiling the Potential of Superexpressive Networks in Implicit Neural Representations},
  author={Uvini Balasuriya Mudiyanselage and Woojin Cho and Minju Jo and Noseong Park and Kookjin Lee},
  journal={arXiv preprint arXiv:2503.21166},
  year={2025}
}