
Synthetic Data Augmentation using Pre-trained Diffusion Models for Long-tailed Food Image Classification

Comments: 8 pages (main) + 2 pages (bibliography), 4 figures, 3 tables
Abstract

Deep learning-based food image classification enables precise identification of food categories, which in turn facilitates accurate nutritional analysis. However, real-world food images often follow a skewed distribution, with some food types far more prevalent than others. This class imbalance is problematic: models come to favor the majority (head) classes while performance degrades on the less common (tail) classes. Recently, synthetic data augmentation using diffusion-based generative models has emerged as a promising solution to this issue. By generating high-quality synthetic images, these models can help uniformize the data distribution, potentially improving classification performance. However, existing approaches face challenges: fine-tuning-based methods require a uniformly distributed dataset, while pre-trained-model-based approaches often overlook inter-class separation in the synthetic data. In this paper, we propose a two-stage synthetic data augmentation framework that leverages pre-trained diffusion models for long-tailed food classification. We first generate a reference set conditioned on the generation target via a positive prompt, and then select a class that shares similar features with the generation target as a negative prompt. Subsequently, we generate a synthetic augmentation set using both positive and negative prompt conditions through a combined sampling strategy that promotes intra-class diversity and inter-class separation. We demonstrate the efficacy of the proposed method on two long-tailed food benchmark datasets, achieving superior top-1 accuracy compared to previous works.
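The abstract does not give the exact sampling formula, but the combined positive/negative prompt conditioning it describes is typically realized as a classifier-free-guidance-style combination of noise predictions at each denoising step. The sketch below illustrates that idea on dummy arrays; the function name `combined_guidance` and the scales `w_pos`/`w_neg` are hypothetical and not taken from the paper.

```python
import numpy as np

def combined_guidance(eps_uncond, eps_pos, eps_neg, w_pos=7.5, w_neg=2.0):
    """Illustrative one-step guidance combination: steer the sample toward
    the target class (positive prompt) and away from a visually similar
    class (negative prompt). Scales are assumptions, not the paper's values."""
    return (eps_uncond
            + w_pos * (eps_pos - eps_uncond)   # pull toward the target class
            - w_neg * (eps_neg - eps_uncond))  # push away from the confusable class

# Dummy noise predictions standing in for a diffusion model's three
# conditional passes (unconditional, positive prompt, negative prompt).
rng = np.random.default_rng(0)
shape = (4, 64, 64)  # latent-like tensor
eps_u, eps_p, eps_n = (rng.standard_normal(shape) for _ in range(3))
eps = combined_guidance(eps_u, eps_p, eps_n)
```

With `w_neg=0` this reduces to ordinary classifier-free guidance on the positive prompt; the negative term is what enforces inter-class separation from the confusable class.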

@article{koh2025_2506.01368,
  title={Synthetic Data Augmentation using Pre-trained Diffusion Models for Long-tailed Food Image Classification},
  author={GaYeon Koh and Hyun-Jic Oh and Jeonghyun Noh and Won-Ki Jeong},
  journal={arXiv preprint arXiv:2506.01368},
  year={2025}
}