ResearchTrend.AI
Evolutionary Caching to Accelerate Your Off-the-Shelf Diffusion Model

18 June 2025
Anirud Aggarwal
Abhinav Shrivastava
Matthew Gwilliam
Author Contacts:
anirud@umd.edu, abhinav@cs.umd.edu, mgwillia@umd.edu
arXiv (abs) · PDF · HTML
Main: 11 pages · Bibliography: 1 page · Appendix: 17 pages · 26 figures · 12 tables
Abstract

Diffusion-based image generation models excel at producing high-quality synthetic content, but suffer from slow and computationally expensive inference. Prior work has attempted to mitigate this by caching and reusing features within diffusion transformers across inference steps. These methods, however, often rely on rigid heuristics that result in limited acceleration or poor generalization across architectures. We propose Evolutionary Caching to Accelerate Diffusion models (ECAD), a genetic algorithm that learns efficient, per-model caching schedules forming a Pareto frontier, using only a small set of calibration prompts. ECAD requires no modifications to network parameters or reference images. It offers significant inference speedups, enables fine-grained control over the quality-latency trade-off, and adapts seamlessly to different diffusion models. Notably, ECAD's learned schedules can generalize effectively to resolutions and model variants not seen during calibration. We evaluate ECAD on PixArt-alpha, PixArt-Sigma, and this http URL using multiple metrics (FID, CLIP, Image Reward) across diverse benchmarks (COCO, MJHQ-30k, PartiPrompts), demonstrating consistent improvements over previous approaches. On PixArt-alpha, ECAD identifies a schedule that outperforms the previous state-of-the-art method by 4.47 COCO FID while increasing inference speedup from 2.35x to 2.58x. Our results establish ECAD as a scalable and generalizable approach for accelerating diffusion inference. Our project website is available at this https URL and our code is available at this https URL.
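To make the idea concrete: the abstract describes evolving per-model caching schedules (which denoising steps recompute transformer features versus reuse cached ones) toward a quality-latency Pareto frontier. The following toy sketch is not the authors' implementation; the binary-mask encoding, fitness proxies, and hyperparameters are invented for illustration only, standing in for real generation-quality metrics and measured latency.

```python
import random

def evaluate(schedule):
    # Hypothetical proxies (NOT the paper's metrics): latency is the
    # fraction of steps that recompute features; quality is penalized
    # for long cached runs, where reused features grow stale.
    latency = sum(schedule) / len(schedule)
    staleness, run = 0, 0
    for bit in schedule:
        run = 0 if bit else run + 1
        staleness += run
    return latency, -staleness  # (minimize latency, maximize quality)

def dominates(a, b):
    # a dominates b: no worse on either objective, strictly better on one.
    return a[0] <= b[0] and a[1] >= b[1] and a != b

def pareto_front(pop):
    scored = [(s, evaluate(s)) for s in pop]
    return [s for s, f in scored
            if not any(dominates(g, f) for _, g in scored)]

def mutate(schedule, rate=0.1):
    # Flip each recompute/reuse bit with small probability.
    return [b ^ (random.random() < rate) for b in schedule]

def ecad_sketch(steps=20, pop_size=16, generations=30, seed=0):
    # Evolve binary caching schedules; keep the non-dominated set
    # each generation and refill the population with mutants.
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(steps)]
           for _ in range(pop_size)]
    for _ in range(generations):
        front = pareto_front(pop)
        pop = front + [mutate(random.choice(front))
                       for _ in range(pop_size - len(front))]
    return pareto_front(pop)
```

The returned frontier would let a user pick a schedule per deployment: a low-latency point for fast previews, a high-quality point when fidelity matters, mirroring the fine-grained quality-latency control the abstract claims.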

@article{aggarwal2025_2506.15682,
  title={Evolutionary Caching to Accelerate Your Off-the-Shelf Diffusion Model},
  author={Anirud Aggarwal and Abhinav Shrivastava and Matthew Gwilliam},
  journal={arXiv preprint arXiv:2506.15682},
  year={2025}
}