ResearchTrend.AI
arXiv:2412.01754
Efficient Compression of Sparse Accelerator Data Using Implicit Neural Representations and Importance Sampling

2 December 2024
Xihaier Luo
Samuel Lurvey
Yi Huang
Yihui Ren
Jin-zhi Huang
Byung-Jun Yoon
Abstract

High-energy, large-scale particle colliders in nuclear and high-energy physics generate data at extraordinary rates, reaching up to 1 terabyte and several petabytes per second, respectively. The development of real-time, high-throughput data compression algorithms capable of reducing this data to manageable sizes for permanent storage is of paramount importance. A unique characteristic of the tracking detector data is the extreme sparsity of particle trajectories in space, with an occupancy rate ranging from approximately 10⁻⁶ to 10%. Furthermore, for downstream tasks, a continuous representation of this data is often more useful than a voxel-based, discrete representation due to the inherently continuous nature of the signals involved. To address these challenges, we propose a novel approach using implicit neural representations for data learning and compression. We also introduce an importance sampling technique to accelerate the network training process. Our method is competitive with traditional compression algorithms, such as MGARD, SZ, and ZFP, while offering significant speed-ups and maintaining negligible accuracy loss through our importance sampling strategy.
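The importance-sampling idea mentioned in the abstract — biasing training batches toward the rare occupied voxels rather than sampling the volume uniformly — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the frame size, the mixing weight, and the |signal|-proportional weighting are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse "detector frame": mostly zeros, a handful of hits,
# mimicking the low occupancy rates described in the abstract.
frame = np.zeros((64, 64))
hits = rng.integers(0, 64, size=(40, 2))
frame[hits[:, 0], hits[:, 1]] = rng.uniform(0.5, 1.0, size=40)

def importance_sample(frame, n, mix=0.1, rng=rng):
    """Draw n flat voxel indices for a training batch.

    Mixes a uniform distribution (weight `mix`, so empty regions are
    still visited) with one proportional to |signal|, so the sparse
    occupied voxels dominate the batch.
    """
    flat = np.abs(frame).ravel()
    total = flat.sum()
    signal = flat / total if total > 0 else np.full(flat.size, 1.0 / flat.size)
    p = mix / flat.size + (1.0 - mix) * signal
    p /= p.sum()  # guard against floating-point drift
    return rng.choice(flat.size, size=n, p=p)

idx = importance_sample(frame, 512)
frac_nonzero = (frame.ravel()[idx] != 0).mean()
```

With a 10% uniform floor, roughly 90% of the sampled coordinates land on occupied voxels even though the frame's occupancy is well under 1%, which is why such a scheme can cut the number of training iterations an implicit neural representation needs on sparse data.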
