Enhancing Implicit Neural Representations via Symmetric Power Transformation

12 December 2024
Weixiang Zhang
Shuzhao Xie
Chengwei Ren
Shijia Ge
Mingzi Wang
Zhi Wang
Abstract

We propose symmetric power transformation to enhance the capacity of Implicit Neural Representation (INR) from the perspective of data transformation. Unlike prior work that relies on random permutation or index rearrangement, our method is a reversible operation that incurs no additional storage cost. Specifically, we first investigate the characteristics of data that benefit INR training and propose the Range-Defined Symmetric Hypothesis, which posits that a specific range and symmetry can improve the expressive ability of INR. Based on this hypothesis, we propose a nonlinear symmetric power transformation that achieves both range-defined and symmetric properties simultaneously. We use the power coefficient to redistribute data so that it approximates symmetry within the target range. To improve the robustness of the transformation, we further design deviation-aware calibration and an adaptive soft boundary to address extreme deviation boosting and continuity breaking. Extensive experiments verify the performance of the proposed method, demonstrating that our transformation reliably improves INR compared with other data transformations. We also conduct 1D audio, 2D image, and 3D video fitting tasks to demonstrate the effectiveness and applicability of our method.
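To make the idea of a range-defined, symmetric power remapping concrete, here is a minimal sketch in NumPy. It is an assumption-laden illustration, not the paper's exact method: the power coefficient is chosen by the simple heuristic of mapping the data median to mid-range, and the deviation-aware calibration and adaptive soft boundary described in the abstract are omitted. Function names and the `[-1, 1]` target range are illustrative choices.

```python
import numpy as np

def symmetric_power_transform(x, eps=1e-8):
    """Sketch of a power-based remapping toward a symmetric, range-defined signal.

    Assumed, simplified steps:
      1. Min-max normalize x to [0, 1].
      2. Pick a power coefficient alpha so the median lands at 0.5,
         which roughly symmetrizes the distribution about mid-range.
      3. Rescale to the target range [-1, 1].
    Returns the transformed signal and the parameters needed to invert it.
    """
    lo, hi = x.min(), x.max()
    u = (x - lo) / (hi - lo + eps)              # normalize to [0, 1]
    med = np.clip(np.median(u), eps, 1.0 - eps)
    alpha = np.log(0.5) / np.log(med)           # u**alpha sends the median to 0.5
    y = 2.0 * u**alpha - 1.0                    # range [-1, 1], approximately symmetric
    return y, (lo, hi, alpha)

def inverse_symmetric_power_transform(y, params, eps=1e-8):
    """Invert the sketch transform; only (lo, hi, alpha) need to be stored."""
    lo, hi, alpha = params
    u = ((y + 1.0) / 2.0) ** (1.0 / alpha)
    return u * (hi - lo + eps) + lo
```

Because the mapping is a closed-form monotonic function, inverting it requires only the three stored scalars, which is the "reversible, no extra storage" property the abstract contrasts with permutation-based transformations.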

@article{zhang2025_2412.09213,
  title={Enhancing Implicit Neural Representations via Symmetric Power Transformation},
  author={Weixiang Zhang and Shuzhao Xie and Chengwei Ren and Shijia Ge and Mingzi Wang and Zhi Wang},
  journal={arXiv preprint arXiv:2412.09213},
  year={2025}
}