Finding the Muses: Identifying Coresets through Loss Trajectories

12 March 2025
Manish Nagaraj
Deepak Ravikumar
Efstathia Soufleri
Kaushik Roy
Abstract

Deep learning models achieve state-of-the-art performance across domains but face scalability challenges in real-time or resource-constrained scenarios. To address this, we propose Loss Trajectory Correlation (LTC), a novel metric for coreset selection that identifies critical training samples driving generalization. LTC quantifies the alignment between training sample loss trajectories and validation set loss trajectories, enabling the construction of compact, representative subsets. Unlike traditional methods, whose computational and storage overheads make them infeasible to scale to large datasets, LTC achieves superior efficiency because it can be computed as a byproduct of training. Our results on CIFAR-100 and ImageNet-1k show that LTC consistently achieves accuracy on par with or surpassing state-of-the-art coreset selection methods, with any differences remaining under 1%. LTC also transfers effectively across architectures, including ResNet, VGG, DenseNet, and Swin Transformer, with minimal performance degradation (<2%). Additionally, LTC offers insights into training dynamics, such as identifying aligned and conflicting sample behaviors, at a fraction of the computational cost of traditional methods. This framework paves the way for scalable coreset selection and efficient dataset optimization.
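The abstract describes LTC as the alignment between a training sample's loss trajectory and the validation set's loss trajectory, computed from losses already recorded during training. A minimal Python sketch of that idea follows, assuming per-epoch losses are available as arrays; the helper names (ltc_scores, select_coreset), the use of Pearson correlation against the mean validation trajectory, and the top-fraction selection rule are illustrative assumptions, not the paper's exact algorithm.

# Sketch of the LTC idea: score each training sample by how well its
# per-epoch loss trajectory correlates with the validation set's mean
# loss trajectory, then keep the highest-scoring samples as the coreset.
# Scoring and selection details are assumptions, not the published method.
import numpy as np

def ltc_scores(train_losses: np.ndarray, val_losses: np.ndarray) -> np.ndarray:
    # train_losses: (num_train, num_epochs) per-sample loss at each epoch
    # val_losses:   (num_val, num_epochs) per-sample loss at each epoch
    val_traj = val_losses.mean(axis=0)                           # mean validation trajectory, (num_epochs,)
    t = train_losses - train_losses.mean(axis=1, keepdims=True)  # center each training trajectory
    v = val_traj - val_traj.mean()                               # center the validation trajectory
    denom = np.linalg.norm(t, axis=1) * np.linalg.norm(v) + 1e-12
    return (t @ v) / denom                                       # Pearson correlation per sample, (num_train,)

def select_coreset(train_losses, val_losses, fraction=0.1):
    # Indices of the top `fraction` of training samples by LTC score.
    scores = ltc_scores(np.asarray(train_losses), np.asarray(val_losses))
    k = max(1, int(fraction * len(scores)))
    return np.argsort(scores)[-k:]

Because the loss trajectories are logged during training anyway, scoring in this form adds only a single pass of vectorized arithmetic over the recorded losses, which is consistent with the abstract's claim that LTC comes essentially for free as a byproduct of training.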

@article{nagaraj2025_2503.09721,
  title={Finding the Muses: Identifying Coresets through Loss Trajectories},
  author={Manish Nagaraj and Deepak Ravikumar and Efstathia Soufleri and Kaushik Roy},
  journal={arXiv preprint arXiv:2503.09721},
  year={2025}
}