PRISM: Video Dataset Condensation with Progressive Refinement and Insertion for Sparse Motion

28 May 2025
Jaehyun Choi, Jiwan Hur, Gyojin Han, Jaemyung Yu, Junmo Kim
Main: 9 pages, Appendix: 5 pages, Bibliography: 2 pages; 7 figures, 6 tables
Abstract

Video dataset condensation has emerged as a critical technique for addressing the computational challenges of large-scale video data processing in deep learning applications. While significant progress has been made in image dataset condensation, the video domain presents unique challenges due to the complex interplay between spatial content and temporal dynamics. This paper introduces PRISM (Progressive Refinement and Insertion for Sparse Motion), a novel approach to video dataset condensation that fundamentally reconsiders how video data should be condensed. Unlike previous methods that separate static content from dynamic motion, our method preserves the essential interdependence between these elements. Our approach progressively refines and inserts frames, guided by the relation between the gradients of each frame, to fully accommodate the motion in an action while achieving better performance with lower storage cost. Extensive experiments across standard video action recognition benchmarks demonstrate that PRISM outperforms existing disentangled approaches while maintaining compact representations suitable for resource-constrained environments.
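
To make the abstract's idea concrete, below is a minimal, illustrative Python/PyTorch sketch of gradient-matching video condensation with progressive frame insertion: short synthetic clips are optimized so their gradients match those of real clips, and frames are periodically inserted while the clips are still sparse. The tiny 3D-conv network, the midpoint-interpolation insertion rule, the random stand-in data, and all hyperparameters are assumptions made for this example only; this is not the authors' PRISM implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVideoNet(nn.Module):
    # Minimal 3D-conv classifier used only to produce gradients for matching (assumption).
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, num_classes))

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.net(x)

def gradient_match_loss(model, real_x, real_y, syn_x, syn_y):
    # Distance between per-parameter gradients on real vs. synthetic clips.
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y),
                                 model.parameters())
    g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y),
                                model.parameters(), create_graph=True)
    return sum(F.mse_loss(gs, gr.detach()) for gs, gr in zip(g_syn, g_real))

def insert_frame(syn_x):
    # Grow each clip by one frame: insert an interpolated frame at the temporal midpoint
    # (an illustrative insertion rule, not the paper's criterion).
    t = syn_x.shape[2] // 2
    mid = 0.5 * (syn_x[:, :, t - 1] + syn_x[:, :, t])
    return torch.cat([syn_x[:, :, :t], mid.unsqueeze(2), syn_x[:, :, t:]], dim=2)

num_classes = 4
syn_x = torch.randn(num_classes, 3, 2, 32, 32, requires_grad=True)  # 2 sparse frames per class
syn_y = torch.arange(num_classes)
opt = torch.optim.SGD([syn_x], lr=0.1)

for step in range(200):
    model = TinyVideoNet(num_classes)              # fresh random network each step
    real_x = torch.randn(8, 3, 8, 32, 32)          # stand-in for real, denser clips
    real_y = torch.randint(0, num_classes, (8,))
    loss = gradient_match_loss(model, real_x, real_y, syn_x, syn_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Progressive insertion: periodically add a frame while the clips stay compact.
    if (step + 1) % 50 == 0 and syn_x.shape[2] < 6:
        syn_x = insert_frame(syn_x.detach()).requires_grad_(True)
        opt = torch.optim.SGD([syn_x], lr=0.1)

In this sketch the synthetic clips start with very few frames and only grow when needed, which mirrors the storage-versus-motion trade-off the abstract describes; PRISM's actual refinement and insertion criteria are based on the relation of per-frame gradients and are detailed in the paper.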

View on arXiv: https://arxiv.org/abs/2505.22564
@article{choi2025_2505.22564,
  title={PRISM: Video Dataset Condensation with Progressive Refinement and Insertion for Sparse Motion},
  author={Jaehyun Choi and Jiwan Hur and Gyojin Han and Jaemyung Yu and Junmo Kim},
  journal={arXiv preprint arXiv:2505.22564},
  year={2025}
}