
Learning to Integrate Diffusion ODEs by Averaging the Derivatives

Abstract

When accelerating diffusion model inference, numerical solvers perform poorly at very small step counts, while distillation techniques often introduce additional complexity and instability. This work presents an intermediate strategy, balancing performance and cost, by learning ODE integration using loss functions derived from the derivative-integral relationship, inspired by Monte Carlo integration and Picard iteration. From a geometric perspective, the losses operate by gradually extending the tangent to the secant, and are thus named secant losses. The secant losses can rapidly convert (via fine-tuning or distillation) a pretrained diffusion model into its secant version. In our experiments, the secant version of EDM achieves a 10-step FID of 2.14 on CIFAR-10, while the secant version of SiT-XL/2 attains a 4-step FID of 2.27 and an 8-step FID of 1.96 on ImageNet-256×256. Code will be available.
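To make the tangent-to-secant idea concrete, below is a minimal, hypothetical PyTorch sketch of what one Monte Carlo estimate of such a secant loss could look like under a flow-matching parameterization (as in SiT, with t=0 noise and t=1 data). The names `secant_model` and `teacher`, the linear interpolation, and the bootstrapped intermediate step are illustrative assumptions, not the paper's actual training recipe.

```python
import torch

def secant_loss(secant_model, teacher, x1):
    """One Monte Carlo estimate of a secant-style loss.

    The secant model s(x, t, r) is trained so that a single step
    x_r = x_t + (r - t) * s(x_t, t, r) matches the ODE integral,
    i.e. s predicts the *average* derivative over [t, r] rather
    than the tangent at t.
    """
    b = x1.shape[0]
    view = lambda a: a.view(-1, *([1] * (x1.dim() - 1)))

    # Sample an interval [t, r] inside [0, 1] (t toward noise, r toward data).
    t = torch.rand(b, device=x1.device)
    r = t + (1.0 - t) * torch.rand(b, device=x1.device)

    # Linear flow-matching interpolation to obtain x_t (an assumption here).
    x0 = torch.randn_like(x1)
    xt = view(1.0 - t) * x0 + view(t) * x1

    # Monte Carlo integration: a uniform u in [t, r] turns the integral
    # (the average derivative) into an expectation over instantaneous
    # derivatives queried at intermediate times.
    u = t + (r - t) * torch.rand(b, device=x1.device)
    with torch.no_grad():
        # Picard-style bootstrap: reach x_u with the current (frozen)
        # secant model, then take the teacher's tangent at (x_u, u).
        xu = xt + view(u - t) * secant_model(xt, t, u)
        target = teacher(xu, u)

    # Regressing onto single-sample tangents makes the minimizer the mean
    # tangent over [t, r], i.e. the secant (tangent -> secant in expectation).
    pred = secant_model(xt, t, r)
    return ((pred - target) ** 2).mean()
```

The key design point this sketch illustrates is the regression-to-the-mean effect: minimizing squared error against single sampled tangents drives the prediction toward their expectation over [t, r], which is exactly the average derivative the secant step needs.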

@article{liu2025_2505.14502,
  title={Learning to Integrate Diffusion ODEs by Averaging the Derivatives},
  author={Wenze Liu and Xiangyu Yue},
  journal={arXiv preprint arXiv:2505.14502},
  year={2025}
}