Efficient Implicit Neural Compression of Point Clouds via Learnable Activation in Latent Space

20 April 2025
Yichi Zhang
Qianqian Yang
    3DPC
    AI4CE
Abstract

Implicit Neural Representations (INRs), also known as neural fields, have emerged as a powerful paradigm in deep learning, parameterizing continuous spatial fields using coordinate-based neural networks. In this paper, we propose \textbf{PICO}, an INR-based framework for static point cloud compression. Unlike prevailing encoder-decoder paradigms, we decompose the point cloud compression task into two separate stages: geometry compression and attribute compression, each with distinct INR optimization objectives. Inspired by Kolmogorov-Arnold Networks (KANs), we introduce a novel network architecture, \textbf{LeAFNet}, which leverages learnable activation functions in the latent space to better approximate the target signal's implicit function. By reformulating point cloud compression as neural parameter compression, we further improve compression efficiency through quantization and entropy coding. Experimental results demonstrate that \textbf{LeAFNet} outperforms conventional MLPs in INR-based point cloud compression. Furthermore, \textbf{PICO} achieves superior geometry compression performance compared to the current MPEG point cloud compression standard, yielding an average improvement of 4.92 dB in D1 PSNR. In joint geometry and attribute compression, our approach exhibits highly competitive results, with an average PCQM gain of $2.7 \times 10^{-3}$.
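
As a rough illustration of the idea of learnable activations applied in the latent (hidden) feature space of a coordinate-based INR, the sketch below replaces fixed nonlinearities with per-channel activations whose coefficients are trained jointly with the weights. The class names (LearnableActivation, LeAFNetSketch), the sinusoidal basis used here in place of KAN-style spline bases, and all hyperparameters are assumptions for illustration only, not the authors' LeAFNet implementation.

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """Per-channel learnable activation: a weighted sum of fixed basis
    functions whose mixing coefficients are trained with the network.
    (Hypothetical stand-in for the paper's KAN-inspired latent activations.)"""
    def __init__(self, channels: int, num_freqs: int = 4):
        super().__init__()
        # one coefficient per (channel, basis function); basis = {x, sin(kx)}
        self.coeffs = nn.Parameter(torch.randn(channels, num_freqs + 1) * 0.1)
        self.register_buffer("freqs", torch.arange(1, num_freqs + 1, dtype=torch.float32))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., channels)
        basis = [x] + [torch.sin(f * x) for f in self.freqs]   # list of (..., C)
        basis = torch.stack(basis, dim=-1)                      # (..., C, K+1)
        return (basis * self.coeffs).sum(dim=-1)                # (..., C)

class LeAFNetSketch(nn.Module):
    """Coordinate-based INR for the geometry stage: 3D coordinate -> occupancy
    logit, with learnable activations in every hidden (latent) layer."""
    def __init__(self, hidden: int = 128, depth: int = 4):
        super().__init__()
        layers = [nn.Linear(3, hidden), LearnableActivation(hidden)]
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden, hidden), LearnableActivation(hidden)]
        layers += [nn.Linear(hidden, 1)]  # occupancy logit per query point
        self.net = nn.Sequential(*layers)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.net(xyz)

# usage: overfit the INR to a single point cloud's occupancy field
model = LeAFNetSketch()
coords = torch.rand(4096, 3) * 2 - 1   # query points in [-1, 1]^3
logits = model(coords)                  # (4096, 1) occupancy logits
```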

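Likewise, "reformulating point cloud compression as neural parameter compression" can be pictured as quantizing the fitted INR's weights and entropy-coding the resulting symbols. The toy sketch below uses assumed choices throughout: uniform scalar quantization with a fixed step size, and zlib standing in for a proper entropy coder; none of it is taken from the paper, and the receiver is assumed to know the network architecture so only the coded symbols need to be transmitted.

```python
import io
import zlib
import numpy as np
import torch

def compress_parameters(model: torch.nn.Module, step: float = 1e-3) -> bytes:
    """Uniformly quantize every tensor in the state dict and entropy-code the
    integer symbols (zlib as a stand-in entropy coder)."""
    payload = io.BytesIO()
    for tensor in model.state_dict().values():
        q = torch.round(tensor / step).to(torch.int32).numpy()  # quantized symbols
        payload.write(q.tobytes())
    return zlib.compress(payload.getvalue(), level=9)

def decompress_parameters(blob: bytes, model: torch.nn.Module, step: float = 1e-3) -> None:
    """Decode the symbols and dequantize back into a model with the same
    architecture (the receiver re-instantiates the network, then loads weights)."""
    raw = zlib.decompress(blob)
    offset = 0
    state = model.state_dict()
    for name, tensor in state.items():
        n = tensor.numel()
        q = np.frombuffer(raw, dtype=np.int32, count=n, offset=offset).reshape(tuple(tensor.shape))
        state[name] = torch.from_numpy(q.astype(np.float32)) * step
        offset += n * 4  # int32 symbols
    model.load_state_dict(state)
```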
@article{zhang2025_2504.14471,
  title={Efficient Implicit Neural Compression of Point Clouds via Learnable Activation in Latent Space},
  author={Yichi Zhang and Qianqian Yang},
  journal={arXiv preprint arXiv:2504.14471},
  year={2025}
}