Stable Diffusion for Data Augmentation in COCO and Weed Datasets

Generative models have increasingly influenced fields ranging from computer vision to interior design and beyond. Stable Diffusion, a powerful diffusion model, can create high-resolution, richly detailed images from text prompts or reference images. A persistent challenge is improving performance on small datasets with image-sparse categories. This study evaluates the effectiveness of Stable Diffusion on seven common COCO categories and three widespread weed species. Synthetic images were generated using three Stable Diffusion-based techniques, Image-to-Image Translation, DreamBooth, and ControlNet, each with a distinct focus. Classification and detection models were then trained on the synthetic images, and their performance was compared with that of models trained on the original images. Promising results were achieved for certain classes, demonstrating the potential of Stable Diffusion to enhance image-sparse datasets. This foundational study may accelerate the adoption of diffusion models for data augmentation across various domains.
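
To make the image-to-image augmentation step concrete, the sketch below uses the Hugging Face diffusers library to perturb a real training image into a synthetic variant. The model checkpoint, prompt, file names, and strength setting are illustrative assumptions rather than the paper's reported configuration.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a pretrained Stable Diffusion img2img pipeline
# (the checkpoint choice is an assumption, not the paper's).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A real image from the image-sparse class serves as the reference
# ("weed_sample.jpg" is a hypothetical file name).
init_image = Image.open("weed_sample.jpg").convert("RGB").resize((512, 512))

# `strength` controls how far the output may drift from the reference:
# lower values preserve the original layout, higher values add variation.
result = pipe(
    prompt="a photo of a weed growing in a crop field",  # hypothetical prompt
    image=init_image,
    strength=0.5,
    guidance_scale=7.5,
).images[0]

result.save("weed_synthetic.jpg")  # add to the augmented training set
```

Generating several variants per reference image, by varying the strength or prompt, would build up the kind of augmented pool that downstream classification and detection experiments can draw on.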
@article{deng2025_2312.03996,
  title={Stable Diffusion for Data Augmentation in COCO and Weed Datasets},
  author={Boyang Deng},
  journal={arXiv preprint arXiv:2312.03996},
  year={2025}
}