
MARché: Fast Masked Autoregressive Image Generation with Cache-Aware Attention

Main: 9 pages
Figures: 14
Bibliography: 4 pages
Tables: 4
Appendix: 6 pages
Abstract

Masked autoregressive (MAR) models unify the strengths of masked and autoregressive generation by predicting tokens in a fixed order with bidirectional attention for image generation. While effective, MAR models suffer from significant computational overhead: they recompute attention and feed-forward representations for all tokens at every decoding step, even though most tokens remain semantically stable across steps. We propose MARché, a training-free generation framework that addresses this inefficiency through two key components: cache-aware attention and selective KV refresh. Cache-aware attention partitions tokens into active and cached sets, enabling separate computation paths that reuse previously computed key/value projections without compromising full-context modeling. However, a cached token cannot be reused indefinitely, because its context changes over successive decoding steps. MARché therefore applies selective KV refresh: using the attention scores from newly generated tokens, it identifies the contextually relevant cached tokens and recomputes only those, preserving image generation quality. MARché substantially reduces redundant computation in MAR without modifying the underlying architecture. Empirically, it achieves up to 1.7x speedup with negligible impact on image quality, offering a scalable and broadly applicable solution for efficient masked transformer generation.
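To make the two mechanisms concrete, below is a minimal PyTorch-style sketch of how cache-aware attention and selective KV refresh could interact within a single decoding step. It assumes single-head attention with plain 2-D weight matrices; the function name, the top_k refresh budget, and the sum-of-attention relevance heuristic are illustrative assumptions, not the paper's implementation.

import torch

def cache_aware_attention(x, active, k_cache, v_cache, w_q, w_k, w_v, top_k=32):
    # Hypothetical sketch, not the authors' code.
    # x:       (N, d) token embeddings at the current decoding step
    # active:  (N,)   bool mask of tokens whose K/V must be (re)computed
    # k_cache: (N, d) key projections cached from earlier steps
    # v_cache: (N, d) value projections cached from earlier steps
    d = x.size(-1)

    # 1) Cache-aware K/V: project only the active tokens, reuse the rest.
    k = k_cache.clone()
    v = v_cache.clone()
    k[active] = x[active] @ w_k
    v[active] = x[active] @ w_v

    # 2) Full-context attention, with queries only for active tokens.
    q = x[active] @ w_q                  # (A, d)
    scores = (q @ k.T) / d ** 0.5        # (A, N): attend over all N tokens
    attn = scores.softmax(dim=-1)
    out = attn @ v                       # (A, d)

    # 3) Selective KV refresh: cached tokens receiving the most attention
    #    from the newly generated tokens are flagged for recomputation.
    relevance = attn.sum(dim=0)          # (N,) total attention received
    relevance[active] = float('-inf')    # active tokens are already fresh
    n_refresh = min(top_k, int((~active).sum()))
    refresh_idx = relevance.topk(n_refresh).indices
    return out, k, v, refresh_idx

A usage sketch, again with illustrative shapes: the indices returned in refresh_idx would be merged into the active set at the next step, so stale entries get recomputed while the rest of the cache is reused.

N, d = 256, 64
x = torch.randn(N, d)
active = torch.zeros(N, dtype=torch.bool)
active[:16] = True                       # e.g. the 16 newly sampled tokens
w_q, w_k, w_v = (torch.randn(d, d) / d ** 0.5 for _ in range(3))
out, k_cache, v_cache, refresh = cache_aware_attention(
    x, active, torch.randn(N, d), torch.randn(N, d), w_q, w_k, w_v)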

@article{jiang2025_2506.12035,
  title={MARché: Fast Masked Autoregressive Image Generation with Cache-Aware Attention},
  author={Chaoyi Jiang and Sungwoo Kim and Lei Gao and Hossein Entezari Zarch and Won Woo Ro and Murali Annavaram},
  journal={arXiv preprint arXiv:2506.12035},
  year={2025}
}