Flexiffusion: Training-Free Segment-Wise Neural Architecture Search for Efficient Diffusion Models

3 June 2025
Hongtao Huang
Xiaojun Chang
Lina Yao
Main: 8 pages · 14 figures · 10 tables · Bibliography: 2 pages · Appendix: 6 pages
Abstract

Diffusion models (DMs) are powerful generative models capable of producing high-fidelity images but are constrained by high computational costs due to iterative multi-step inference. While Neural Architecture Search (NAS) can optimize DMs, existing methods are hindered by retraining requirements, exponential search complexity from step-wise optimization, and slow evaluation relying on massive image generation. To address these challenges, we propose Flexiffusion, a training-free NAS framework that jointly optimizes generation schedules and model architectures without modifying pre-trained parameters. Our key insight is to decompose the generation process into flexible segments of equal length, where each segment dynamically combines three step types: full (complete computation), partial (cache-reused computation), and null (skipped computation). This segment-wise search space reduces the candidate pool exponentially compared to step-wise NAS while preserving architectural diversity. Further, we introduce relative FID (rFID), a lightweight evaluation metric for NAS that measures divergence from a teacher model's outputs instead of ground truth, slashing evaluation time by over 90%. In practice, Flexiffusion achieves at least 2× acceleration across LDMs, Stable Diffusion, and DDPMs on ImageNet and MS-COCO, with FID degradation under 5%, outperforming prior NAS and caching methods. Notably, it attains a 5.1× speedup on Stable Diffusion with near-identical CLIP scores. Our work pioneers a resource-efficient paradigm for searching high-speed DMs without sacrificing quality.
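The segment-wise search space is easy to picture in code. Below is a minimal, runnable Python sketch, not the authors' implementation: the three step types come straight from the abstract, while the segment length of 4 and the particular SEGMENT_PATTERNS pool are hypothetical illustrations of how fixing a small per-segment pattern pool collapses the step-wise combinatorics.

from enum import Enum

class StepType(Enum):
    FULL = "full"        # complete computation of the denoising network
    PARTIAL = "partial"  # reuse cached features from the last FULL step
    NULL = "null"        # skip this step's computation entirely

# Hypothetical pattern pool: each length-4 segment starts with a FULL
# step to refresh the feature cache, then mixes cheaper step types.
SEGMENT_PATTERNS = [
    (StepType.FULL, StepType.FULL, StepType.FULL, StepType.FULL),
    (StepType.FULL, StepType.PARTIAL, StepType.PARTIAL, StepType.PARTIAL),
    (StepType.FULL, StepType.PARTIAL, StepType.NULL, StepType.PARTIAL),
    (StepType.FULL, StepType.NULL, StepType.NULL, StepType.NULL),
]

def schedule_from_segments(choices):
    """Flatten one pattern choice per segment into a full step schedule."""
    steps = []
    for idx in choices:
        steps.extend(SEGMENT_PATTERNS[idx])
    return steps

# A 20-step schedule is 5 segments: 4 ** 5 = 1024 candidate schedules,
# versus 3 ** 20 (about 3.5 billion) for a fully step-wise search.
schedule = schedule_from_segments([0, 1, 1, 2, 3])
print(len(schedule), [s.value for s in schedule])

Leading every segment with a FULL step is one plausible way to keep cached features fresh between cheaper steps; the actual pattern pool searched in the paper may differ.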
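The rFID idea can likewise be sketched with the standard Fréchet distance, swapping the usual real-image reference statistics for features of the teacher model's generations. The paper's exact evaluation protocol (feature extractor, batch size) is not reproduced here; this is just the Fréchet formula applied to two feature arrays.

import numpy as np
from scipy import linalg

def rfid(student_feats, teacher_feats):
    """Relative FID: Fréchet distance between a candidate schedule's
    generation features and the teacher model's, not real images.

    Both inputs are (N, D) arrays of Inception-style features from a
    small batch of generations, so no large reference set is needed.
    """
    mu_s, mu_t = student_feats.mean(0), teacher_feats.mean(0)
    cov_s = np.cov(student_feats, rowvar=False)
    cov_t = np.cov(teacher_feats, rowvar=False)
    covmean = linalg.sqrtm(cov_s @ cov_t)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu_s - mu_t
    return float(diff @ diff + np.trace(cov_s + cov_t - 2.0 * covmean))

# Sanity check on synthetic features: identical distributions give rFID ~ 0.
rng = np.random.default_rng(0)
feats = rng.normal(size=(256, 64))
print(rfid(feats, feats))  # approximately 0.0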

View on arXiv: https://arxiv.org/abs/2506.02488
@article{huang2025_2506.02488,
  title={Flexiffusion: Training-Free Segment-Wise Neural Architecture Search for Efficient Diffusion Models},
  author={Hongtao Huang and Xiaojun Chang and Lina Yao},
  journal={arXiv preprint arXiv:2506.02488},
  year={2025}
}