Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration

Diffusion transformers have shown exceptional performance in visual generation but incur high computational costs. Token reduction techniques, which accelerate inference by sharing the denoising computation among similar tokens, have been introduced. However, existing approaches neglect the denoising priors of diffusion models, leading to suboptimal acceleration and diminished image quality. This study proposes a novel concept: attend to prune feature redundancies in areas not attended to by the diffusion process. We analyze the location and degree of feature redundancies based on the structure-then-detail denoising priors. Subsequently, we introduce SDTM, a structure-then-detail token merging approach that dynamically compresses feature redundancies. Specifically, we design dynamic visual token merging, compression ratio adjusting, and prompt reweighting for the different denoising stages. Applied in a post-training manner, the proposed method can be integrated seamlessly into any DiT architecture. Extensive experiments across various backbones, schedulers, and datasets demonstrate the superiority of our method; for example, it achieves a 1.55x acceleration with negligible impact on image quality. Project page: this https URL.
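
For context on what "sharing the denoising computation among similar tokens" looks like in practice, below is a minimal, generic sketch of similarity-based token merging (bipartite soft matching in the style of ToMe), not the paper's SDTM algorithm; the function name, the fixed merge ratio, and the even/odd token split are illustrative assumptions.

import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, ratio: float = 0.3) -> torch.Tensor:
    """Generic similarity-based token merging (illustrative, not SDTM).

    x: token features of shape [B, N, C]. Tokens are split into source and
    destination sets; the most redundant source tokens are averaged into
    their most similar destination tokens, shortening the sequence.
    """
    B, N, C = x.shape
    r = int((N // 2) * ratio)  # number of source tokens to merge away
    if r <= 0:
        return x

    src, dst = x[:, ::2], x[:, 1::2]  # simple bipartite split (assumption)
    sim = F.normalize(src, dim=-1) @ F.normalize(dst, dim=-1).transpose(-1, -2)

    best_sim, best_dst = sim.max(dim=-1)              # nearest destination per source
    order = best_sim.argsort(dim=-1, descending=True)
    merge_idx, keep_idx = order[:, :r], order[:, r:]  # most redundant sources get merged

    # Scatter-add merged sources into their destinations, then average.
    dst = dst.clone()
    dst_idx = best_dst.gather(1, merge_idx).unsqueeze(-1).expand(-1, -1, C)
    counts = torch.ones(B, dst.shape[1], 1, device=x.device, dtype=x.dtype)
    dst.scatter_add_(1, dst_idx, src.gather(1, merge_idx.unsqueeze(-1).expand(-1, -1, C)))
    counts.scatter_add_(1, dst_idx[..., :1], torch.ones_like(dst_idx[..., :1], dtype=x.dtype))
    dst = dst / counts

    kept_src = src.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, C))
    return torch.cat([kept_src, dst], dim=1)          # reduced token sequence

In a post-training setting such a merge would be applied inside transformer blocks at inference time only; SDTM additionally varies where and how aggressively merging happens across the structure and detail stages of denoising, which this sketch does not capture.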
@article{fang2025_2505.11707,
  title={Attend to Not Attended: Structure-then-Detail Token Merging for Post-training DiT Acceleration},
  author={Haipeng Fang and Sheng Tang and Juan Cao and Enshuo Zhang and Fan Tang and Tong-Yee Lee},
  journal={arXiv preprint arXiv:2505.11707},
  year={2025}
}