
Provably Improving Generalization of Few-Shot Models with Synthetic Data

Abstract

Few-shot image classification remains challenging due to the scarcity of labeled training examples. Augmenting them with synthetic data has emerged as a promising way to alleviate this issue, but models trained on synthetic samples often suffer performance degradation due to the inherent gap between real and synthetic distributions. To address this limitation, we develop a theoretical framework that quantifies the impact of such distribution discrepancies on supervised learning, specifically in the context of image classification. More importantly, our framework suggests practical ways to generate good synthetic samples and to train a predictor with high generalization ability. Building on this framework, we propose a novel theoretically grounded algorithm that integrates prototype learning to optimize both data partitioning and model training, effectively bridging the gap between real few-shot data and synthetic data. Extensive experiments show that our approach outperforms state-of-the-art methods across multiple datasets.
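
For illustration only, the sketch below shows one generic way prototype learning can combine real few-shot data with synthetic data: class prototypes are formed as a weighted mix of real and synthetic support features, and queries are classified by nearest prototype. This is a minimal sketch under assumed conventions, not the authors' algorithm; the synth_weight hyperparameter, the dict-based feature layout, and the helper names are assumptions made for this example.

# Illustrative sketch: prototype-based few-shot classification with a
# weighted mix of real and synthetic support features. NOT the paper's
# method; synth_weight and the data layout are placeholder assumptions.
import torch

def class_prototypes(real_feats, synth_feats, synth_weight=0.5):
    """Build one prototype per class from real and synthetic features.

    real_feats, synth_feats: dicts mapping class id -> tensor of shape [n_i, d].
    synth_weight: assumed down-weighting of synthetic samples (hyperparameter).
    """
    protos = {}
    for c in real_feats:
        real_mean = real_feats[c].mean(dim=0)
        if c in synth_feats and synth_feats[c].numel() > 0:
            synth_mean = synth_feats[c].mean(dim=0)
            protos[c] = (1 - synth_weight) * real_mean + synth_weight * synth_mean
        else:
            protos[c] = real_mean
    return protos

def classify(query_feats, protos):
    """Nearest-prototype classification with Euclidean distance."""
    classes = sorted(protos)
    proto_mat = torch.stack([protos[c] for c in classes])  # [C, d]
    dists = torch.cdist(query_feats, proto_mat)            # [Q, C]
    preds = dists.argmin(dim=1)
    return [classes[i] for i in preds.tolist()]

In practice, features would come from a pretrained backbone applied to both the real support images and the generated synthetic images; the mixing weight controls how much the prototypes trust the synthetic distribution.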

@article{nguyen2025_2505.24190,
  title={Provably Improving Generalization of Few-Shot Models with Synthetic Data},
  author={Lan-Cuong Nguyen and Quan Nguyen-Tri and Bang Tran Khanh and Dung D. Le and Long Tran-Thanh and Khoat Than},
  journal={arXiv preprint arXiv:2505.24190},
  year={2025}
}