Learning to Integrate Diffusion ODEs by Averaging the Derivatives

In accelerating diffusion model inference, numerical solvers perform poorly at extremely small numbers of steps, while distillation techniques often introduce complexity and instability. This work presents an intermediate strategy, balancing performance and cost, that learns to integrate the ODE with loss functions derived from the derivative-integral relationship, inspired by Monte Carlo integration and Picard iteration. From a geometric perspective, the losses operate by gradually extending the tangent to the secant, and are therefore named secant losses. The secant losses can rapidly convert (via fine-tuning or distillation) a pretrained diffusion model into its secant version. In our experiments, the secant version of EDM achieves a -step FID of on CIFAR-10, while the secant version of SiT-XL/2 attains a -step FID of and an -step FID of on ImageNet-. Code will be available.
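The identity underlying this idea is elementary calculus: the exact ODE step can be written as x_s = x_t + (s - t) · v̄(t, s), where v̄(t, s) = (1/(s - t)) ∫_t^s v(x_τ, τ) dτ is the average derivative, i.e., the slope of the secant between x_t and x_s; as s → t it collapses back to the tangent v(x_t, t). Below is a minimal, hypothetical PyTorch sketch of how a secant-matching distillation objective along these lines could look. It is not the paper's exact loss: the toy networks Velocity and Secant, the helper average_derivative, the Euler substep count n_sub, and the uniform sampling of (t, s) are all illustrative assumptions.

```python
import torch
import torch.nn as nn

DIM = 8  # toy data dimension

class Velocity(nn.Module):
    # Toy stand-in for a pretrained diffusion velocity field v(x, t).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM + 1, 128), nn.SiLU(), nn.Linear(128, DIM))
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

class Secant(nn.Module):
    # Student u_theta(x_t, t, s), conditioned on both endpoints of the interval.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM + 2, 128), nn.SiLU(), nn.Linear(128, DIM))
    def forward(self, x, t, s):
        return self.net(torch.cat([x, t, s], dim=-1))

@torch.no_grad()
def average_derivative(v, x_t, t, s, n_sub=8):
    # Estimate (1/(s - t)) * \int_t^s v(x_tau, tau) dtau by rolling the
    # teacher ODE forward with Euler substeps and averaging its derivatives.
    x = x_t.clone()
    dt = (s - t) / n_sub
    acc = torch.zeros_like(x_t)
    for k in range(n_sub):
        tau = t + k * dt
        d = v(x, tau)
        acc = acc + d
        x = x + dt * d
    return acc / n_sub

def secant_loss(u, v, x_t, t, s):
    # Match the student's secant slope to the teacher's average derivative.
    target = average_derivative(v, x_t, t, s)
    return ((u(x_t, t, s) - target) ** 2).mean()

# Usage: one distillation step on random toy inputs.
teacher, student = Velocity(), Secant()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
x_t = torch.randn(32, DIM)
t = torch.rand(32, 1)
s = torch.rand(32, 1)  # as |s - t| grows, the target extends from tangent to secant
opt.zero_grad()
loss = secant_loss(student, teacher, x_t, t, s)
loss.backward()
opt.step()
```

At inference time, a few large steps of the form x_s = x_t + (s - t) · student(x_t, t, s) would replace many small solver steps, which is consistent with the few-step evaluation described in the abstract.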
@article{liu2025_2505.14502,
  title   = {Learning to Integrate Diffusion ODEs by Averaging the Derivatives},
  author  = {Wenze Liu and Xiangyu Yue},
  journal = {arXiv preprint arXiv:2505.14502},
  year    = {2025}
}