VCT: Training Consistency Models with Variational Noise Coupling

Consistency Training (CT) has recently emerged as a strong alternative to diffusion models for image generation. However, non-distillation CT often suffers from high variance and instability, motivating ongoing research into its training dynamics. We propose Variational Consistency Training (VCT), a flexible and effective framework compatible with various forward kernels, including those in flow matching. Its key innovation is a learned noise-data coupling scheme inspired by Variational Autoencoders, where a data-dependent encoder models noise emission. This enables VCT to adaptively learn noise-to-data pairings, reducing training variance relative to the fixed, unsorted pairings in classical CT. Experiments on multiple image datasets demonstrate significant improvements: our method surpasses baselines, achieves state-of-the-art FID among non-distillation CT approaches on CIFAR-10, and matches SoTA performance on ImageNet 64x64 with only two sampling steps. Code is available at this https URL.
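The coupling idea above can be illustrated with a toy sketch. This is not the authors' implementation: the linear-Gaussian encoder, its parameters (`w_mu`, `w_logsig`), and the 1-D setting are illustrative assumptions. It shows the core mechanism only: instead of pairing data with independent noise, noise is sampled from a data-dependent, VAE-style encoder via the reparameterization trick, with a KL term keeping the noise marginal close to a standard Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w_mu, w_logsig):
    # Hypothetical linear-Gaussian encoder q(z|x) = N(mu(x), sigma(x)^2);
    # the parameterization is an illustrative assumption, not from the paper.
    mu = w_mu * x
    log_sigma = np.full_like(x, w_logsig)
    return mu, log_sigma

def sample_coupled_noise(x, w_mu, w_logsig):
    # Reparameterization trick: z = mu(x) + sigma(x) * eps, eps ~ N(0, I).
    # Classical CT would instead use z = eps, independent of x.
    mu, log_sigma = encoder(x, w_mu, w_logsig)
    eps = rng.standard_normal(x.shape)
    return mu + np.exp(log_sigma) * eps

def kl_to_standard_normal(mu, log_sigma):
    # Mean of KL( N(mu, sigma^2) || N(0, 1) ) per dimension; regularizes
    # the learned noise distribution toward the standard Gaussian prior.
    return np.mean(0.5 * (mu**2 + np.exp(2.0 * log_sigma) - 1.0 - 2.0 * log_sigma))

x = rng.standard_normal(1000)          # stand-in for a batch of data
z = sample_coupled_noise(x, w_mu=0.3, w_logsig=-0.2)
kl = kl_to_standard_normal(*encoder(x, w_mu=0.3, w_logsig=-0.2))
```

In a full training loop, the consistency loss on the coupled pairs `(x, z)` and this KL term would be optimized jointly, so the encoder learns pairings that lower the variance of the CT objective.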
@article{silvestri2025_2502.18197,
  title={VCT: Training Consistency Models with Variational Noise Coupling},
  author={Gianluigi Silvestri and Luca Ambrogioni and Chieh-Hsin Lai and Yuhta Takida and Yuki Mitsufuji},
  journal={arXiv preprint arXiv:2502.18197},
  year={2025}
}