
Data Balancing Strategies: A Survey of Resampling and Augmentation Methods

Main: 43 pages
16 figures
Bibliography: 4 pages
5 tables
Abstract

Imbalanced data poses a significant obstacle in machine learning, as an unequal distribution of class labels often results in skewed predictions and diminished model accuracy. To mitigate this problem, various resampling strategies have been developed, encompassing both oversampling and undersampling techniques aimed at modifying class proportions. Conventional oversampling approaches like SMOTE enhance the representation of the minority class, whereas undersampling methods focus on trimming down the majority class. Advances in deep learning have facilitated the creation of more complex solutions, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), which are capable of producing high-quality synthetic examples. This paper reviews a broad spectrum of data balancing methods, classifying them into categories including synthetic oversampling, adaptive techniques, generative models, ensemble-based strategies, hybrid approaches, undersampling, and neighbor-based methods. Furthermore, it highlights current developments in resampling techniques and discusses practical implementations and case studies that validate their effectiveness. The paper concludes by offering perspectives on potential directions for future exploration in this domain.
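To make the contrast between the two resampling families concrete, the following is a minimal sketch using the imbalanced-learn and scikit-learn libraries (an assumption for illustration; the survey itself is library-agnostic, and the synthetic dataset below is purely hypothetical). SMOTE synthesizes new minority-class points by interpolating between minority neighbors, while random undersampling simply discards majority-class points.

```python
# Minimal sketch: SMOTE oversampling vs. random undersampling,
# assuming scikit-learn and imbalanced-learn are installed.
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic two-class dataset with a 9:1 class imbalance (illustrative only).
X, y = make_classification(
    n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=42
)
print("original:", Counter(y))

# Oversampling: SMOTE generates new minority samples by interpolating
# between existing minority-class neighbors.
X_over, y_over = SMOTE(random_state=42).fit_resample(X, y)
print("after SMOTE:", Counter(y_over))

# Undersampling: randomly remove majority-class samples instead.
X_under, y_under = RandomUnderSampler(random_state=42).fit_resample(X, y)
print("after undersampling:", Counter(y_under))
```

Both calls return a rebalanced copy of the data; in practice the choice between adding synthetic minority samples and discarding majority samples depends on dataset size and the cost of losing information.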

@article{yousefimehr2025_2505.13518,
  title={Data Balancing Strategies: A Survey of Resampling and Augmentation Methods},
  author={Behnam Yousefimehr and Mehdi Ghatee and Mohammad Amin Seifi and Javad Fazli and Sajed Tavakoli and Zahra Rafei and Shervin Ghaffari and Abolfazl Nikahd and Mahdi Razi Gandomani and Alireza Orouji and Ramtin Mahmoudi Kashani and Sarina Heshmati and Negin Sadat Mousavi},
  journal={arXiv preprint arXiv:2505.13518},
  year={2025}
}