SenseFlow: Scaling Distribution Matching for Flow-based Text-to-Image Distillation

Distribution Matching Distillation (DMD) has been successfully applied to text-to-image diffusion models such as Stable Diffusion (SD) 1.5. However, vanilla DMD suffers from convergence difficulties on large-scale flow-based text-to-image models such as SD 3.5 and FLUX. In this paper, we first analyze the issues that arise when applying vanilla DMD to large-scale models. Then, to overcome the scalability challenge, we propose implicit distribution alignment (IDA) to regularize the distance between the generator and the fake distribution. Furthermore, we propose intra-segment guidance (ISG) to relocate the timestep importance distribution from the teacher model. With IDA alone, DMD converges for SD 3.5; with both IDA and ISG, DMD converges for SD 3.5 and FLUX.1 dev. Along with other improvements such as scaled-up discriminator models, our final model, dubbed SenseFlow, achieves superior distillation performance for both diffusion-based text-to-image models such as SDXL and flow-matching models such as SD 3.5 Large and FLUX. The source code will be available at this https URL.
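
The following PyTorch sketch illustrates, under stated assumptions, how a DMD-style distillation step could be combined with an IDA-like regularizer (keeping the generator close to the fake distribution) and an ISG-like reweighting of sampled timesteps. All function and variable names are hypothetical placeholders and do not come from the paper; this is an illustration of the general idea, not the authors' implementation.

import torch
import torch.nn.functional as F

def distill_step(generator, teacher, fake_model, noise, t_weights, lambda_ida=0.5):
    """One hypothetical distillation step (names are placeholders).

    generator  : few-step student mapping noise -> clean latents
    teacher    : frozen flow/diffusion teacher (predicts a velocity/score)
    fake_model : online model tracking the generator's output distribution
    noise      : (B, C, H, W) Gaussian latents
    t_weights  : (T,) ISG-style importance weights over timesteps
    """
    T = t_weights.numel()
    # ISG-like step (assumption): draw timesteps from a reweighted,
    # teacher-informed distribution instead of a uniform one.
    t = torch.multinomial(t_weights, noise.shape[0], replacement=True)

    x0 = generator(noise)                               # student sample
    eps = torch.randn_like(x0)
    sigma = (t.float() / T).view(-1, 1, 1, 1)
    x_t = (1.0 - sigma) * x0 + sigma * eps              # re-noise along a linear flow path

    with torch.no_grad():
        v_real = teacher(x_t, t)                        # teacher prediction
        v_fake = fake_model(x_t, t)                     # fake-distribution prediction
        # DMD-style surrogate target: nudge the sample in the direction that
        # makes the fake distribution match the real one.
        target = x0 - (v_fake - v_real)
        # Fake model's implied clean sample, used by the IDA-like term below.
        x0_fake = x_t - sigma * v_fake

    loss_dmd = F.mse_loss(x0, target)

    # IDA-like regularizer (assumption): explicitly penalize the distance
    # between the generator output and the fake distribution's reconstruction
    # so the two do not drift apart during training.
    loss_ida = F.mse_loss(x0, x0_fake)

    return loss_dmd + lambda_ida * loss_ida

In a full training loop one would also update fake_model on fresh generator samples with the usual denoising/flow-matching loss, which this sketch omits for brevity.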
@article{ge2025_2506.00523,
  title   = {SenseFlow: Scaling Distribution Matching for Flow-based Text-to-Image Distillation},
  author  = {Xingtong Ge and Xin Zhang and Tongda Xu and Yi Zhang and Xinjie Zhang and Yan Wang and Jun Zhang},
  journal = {arXiv preprint arXiv:2506.00523},
  year    = {2025}
}