FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation
arXiv: 2505.20353
26 May 2025
Dong Liu, Jiayi Zhang, Yifan Li, Yanxuan Yu, Ben Lengerich, Ying Nian Wu
Papers citing "FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation" (15 papers shown):
BlockDance: Reuse Structurally Similar Spatio-Temporal Features to Accelerate Diffusion Transformers (20 Mar 2025)
Hui Zhang, Tingwei Gao, Jie Shao, Zuxuan Wu. Citations: 2.

FasterCache: Training-Free Video Diffusion Model Acceleration with High Quality (13 Mar 2025)
Zhengyao Lv, Chenyang Si, Junhao Song, Zhenyu Yang, Ping Luo, Ziwei Liu, Kwan-Yee K. Wong. Tags: VGen, DiffM. Citations: 15.

Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model (28 Nov 2024)
Feng Liu, Shiwei Zhang, Xiaofeng Wang, Yujie Wei, Haonan Qiu, Yuzhong Zhao, Yingya Zhang, Qixiang Ye, Fang Wan. Tags: VGen, AI4TS. Citations: 22.

Real-Time Video Generation with Pyramid Attention Broadcast (22 Aug 2024)
Xuanlei Zhao, Xiaolong Jin, Kai Wang, Yang You. Tags: VGen, DiffM. Citations: 40.

Latte: Latent Diffusion Transformer for Video Generation (05 Jan 2024)
Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziqiang Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao. Tags: DiffM, VGen. Citations: 269.

H2O: Heavy-Hitter Oracle for Efficient Generative Inference of Large Language Models (24 Jun 2023)
Zhenyu Zhang, Ying Sheng, Dinesh Manocha, Tianlong Chen, Lianmin Zheng, ..., Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, Beidi Chen. Tags: VLM. Citations: 289.

Token Merging for Fast Stable Diffusion (30 Mar 2023)
Daniel Bolya, Judy Hoffman. Citations: 107.

Scalable Diffusion Models with Transformers (19 Dec 2022)
William S. Peebles, Saining Xie. Tags: GNN. Citations: 2,298.

Token Merging: Your ViT But Faster (17 Oct 2022)
Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, Judy Hoffman. Tags: MoMe. Citations: 454.

IA-RED²: Interpretability-Aware Redundancy Reduction for Vision Transformers (23 Jun 2021)
Bowen Pan, Yikang Shen, Yi Ding, Zhangyang Wang, Rogerio Feris, A. Oliva. Tags: VLM, ViT. Citations: 160.

DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification (03 Jun 2021)
Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh. Tags: ViT. Citations: 697.

Emerging Properties in Self-Supervised Vision Transformers (29 Apr 2021)
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin. Citations: 6,059.

SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning (17 Dec 2020)
Hanrui Wang, Zhekai Zhang, Song Han. Citations: 390.

Rethinking the Value of Network Pruning (11 Oct 2018)
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, Trevor Darrell. Citations: 1,471.

SkipNet: Learning Dynamic Routing in Convolutional Networks (26 Nov 2017)
Xin Wang, Feng Yu, Zi-Yi Dou, Trevor Darrell, Joseph E. Gonzalez. Citations: 635.